00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2375 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3640 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.087 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.088 The recommended git tool is: git 00:00:00.088 using credential 00000000-0000-0000-0000-000000000002 00:00:00.090 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.144 Fetching changes from the remote Git repository 00:00:00.146 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.206 Using shallow fetch with depth 1 00:00:00.206 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.206 > git --version # timeout=10 00:00:00.251 > git --version # 'git version 2.39.2' 00:00:00.251 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.292 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.292 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.322 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.334 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.346 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:05.346 > git config core.sparsecheckout # timeout=10 00:00:05.357 > git read-tree -mu HEAD # timeout=10 00:00:05.374 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:05.392 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:05.392 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:05.478 [Pipeline] Start of Pipeline 00:00:05.493 [Pipeline] library 00:00:05.494 Loading library shm_lib@master 00:00:05.494 Library shm_lib@master is cached. Copying from home. 00:00:05.508 [Pipeline] node 00:00:05.531 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.538 [Pipeline] { 00:00:05.564 [Pipeline] catchError 00:00:05.567 [Pipeline] { 00:00:05.577 [Pipeline] wrap 00:00:05.583 [Pipeline] { 00:00:05.589 [Pipeline] stage 00:00:05.590 [Pipeline] { (Prologue) 00:00:05.603 [Pipeline] echo 00:00:05.604 Node: VM-host-SM38 00:00:05.608 [Pipeline] cleanWs 00:00:05.617 [WS-CLEANUP] Deleting project workspace... 00:00:05.617 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.623 [WS-CLEANUP] done 00:00:05.843 [Pipeline] setCustomBuildProperty 00:00:05.948 [Pipeline] httpRequest 00:00:06.267 [Pipeline] echo 00:00:06.268 Sorcerer 10.211.164.20 is alive 00:00:06.274 [Pipeline] retry 00:00:06.275 [Pipeline] { 00:00:06.284 [Pipeline] httpRequest 00:00:06.288 HttpMethod: GET 00:00:06.289 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.289 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.291 Response Code: HTTP/1.1 200 OK 00:00:06.291 Success: Status code 200 is in the accepted range: 200,404 00:00:06.292 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.849 [Pipeline] } 00:00:06.861 [Pipeline] // retry 00:00:06.868 [Pipeline] sh 00:00:07.164 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.185 [Pipeline] httpRequest 00:00:07.791 [Pipeline] echo 00:00:07.793 Sorcerer 10.211.164.20 is alive 00:00:07.803 [Pipeline] retry 00:00:07.805 [Pipeline] { 00:00:07.819 [Pipeline] httpRequest 00:00:07.824 HttpMethod: GET 00:00:07.825 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:07.826 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:07.844 Response Code: HTTP/1.1 200 OK 00:00:07.851 Success: Status code 200 is in the accepted range: 200,404 00:00:07.855 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:01:13.959 [Pipeline] } 00:01:13.979 [Pipeline] // retry 00:01:13.987 [Pipeline] sh 00:01:14.276 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:01:16.823 [Pipeline] sh 00:01:17.105 + git -C spdk log --oneline -n5 00:01:17.105 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:17.105 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:01:17.105 4bcab9fb9 correct kick for CQ full case 00:01:17.105 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:01:17.105 318515b44 nvme/perf: interrupt mode support for pcie controller 00:01:17.129 [Pipeline] withCredentials 00:01:17.141 > git --version # timeout=10 00:01:17.155 > git --version # 'git version 2.39.2' 00:01:17.176 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:17.178 [Pipeline] { 00:01:17.188 [Pipeline] retry 00:01:17.190 [Pipeline] { 00:01:17.206 [Pipeline] sh 00:01:17.490 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:17.503 [Pipeline] } 00:01:17.519 [Pipeline] // retry 00:01:17.524 [Pipeline] } 00:01:17.540 [Pipeline] // withCredentials 00:01:17.549 [Pipeline] httpRequest 00:01:18.025 [Pipeline] echo 00:01:18.027 Sorcerer 10.211.164.20 is alive 00:01:18.038 [Pipeline] retry 00:01:18.040 [Pipeline] { 00:01:18.056 [Pipeline] httpRequest 00:01:18.061 HttpMethod: GET 00:01:18.062 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:18.062 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:18.069 Response Code: HTTP/1.1 200 OK 00:01:18.069 Success: Status code 200 is in the accepted range: 200,404 00:01:18.070 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:40.514 [Pipeline] } 
00:01:40.530 [Pipeline] // retry 00:01:40.538 [Pipeline] sh 00:01:40.829 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:42.226 [Pipeline] sh 00:01:42.511 + git -C dpdk log --oneline -n5 00:01:42.511 caf0f5d395 version: 22.11.4 00:01:42.511 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:42.511 dc9c799c7d vhost: fix missing spinlock unlock 00:01:42.511 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:42.511 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:42.530 [Pipeline] writeFile 00:01:42.542 [Pipeline] sh 00:01:42.827 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:42.840 [Pipeline] sh 00:01:43.125 + cat autorun-spdk.conf 00:01:43.125 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:43.125 SPDK_TEST_NVME=1 00:01:43.125 SPDK_TEST_FTL=1 00:01:43.125 SPDK_TEST_ISAL=1 00:01:43.125 SPDK_RUN_ASAN=1 00:01:43.125 SPDK_RUN_UBSAN=1 00:01:43.125 SPDK_TEST_XNVME=1 00:01:43.125 SPDK_TEST_NVME_FDP=1 00:01:43.125 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:43.125 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:43.125 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:43.134 RUN_NIGHTLY=1 00:01:43.136 [Pipeline] } 00:01:43.149 [Pipeline] // stage 00:01:43.165 [Pipeline] stage 00:01:43.168 [Pipeline] { (Run VM) 00:01:43.181 [Pipeline] sh 00:01:43.468 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:43.468 + echo 'Start stage prepare_nvme.sh' 00:01:43.468 Start stage prepare_nvme.sh 00:01:43.468 + [[ -n 7 ]] 00:01:43.468 + disk_prefix=ex7 00:01:43.468 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:43.468 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:43.468 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:43.468 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:43.468 ++ SPDK_TEST_NVME=1 00:01:43.468 ++ SPDK_TEST_FTL=1 00:01:43.468 ++ SPDK_TEST_ISAL=1 00:01:43.468 ++ SPDK_RUN_ASAN=1 00:01:43.468 ++ SPDK_RUN_UBSAN=1 00:01:43.468 ++ SPDK_TEST_XNVME=1 00:01:43.468 ++ SPDK_TEST_NVME_FDP=1 00:01:43.468 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:43.468 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:43.468 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:43.468 ++ RUN_NIGHTLY=1 00:01:43.468 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:43.468 + nvme_files=() 00:01:43.468 + declare -A nvme_files 00:01:43.468 + backend_dir=/var/lib/libvirt/images/backends 00:01:43.468 + nvme_files['nvme.img']=5G 00:01:43.468 + nvme_files['nvme-cmb.img']=5G 00:01:43.468 + nvme_files['nvme-multi0.img']=4G 00:01:43.468 + nvme_files['nvme-multi1.img']=4G 00:01:43.468 + nvme_files['nvme-multi2.img']=4G 00:01:43.468 + nvme_files['nvme-openstack.img']=8G 00:01:43.468 + nvme_files['nvme-zns.img']=5G 00:01:43.468 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:43.468 + (( SPDK_TEST_FTL == 1 )) 00:01:43.468 + nvme_files["nvme-ftl.img"]=6G 00:01:43.468 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:43.468 + nvme_files["nvme-fdp.img"]=1G 00:01:43.468 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:43.468 + for nvme in "${!nvme_files[@]}" 00:01:43.468 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:43.468 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:43.468 + for nvme in "${!nvme_files[@]}" 00:01:43.468 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G 00:01:43.468 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:43.468 + for nvme in "${!nvme_files[@]}" 00:01:43.468 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:43.468 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:43.468 + for nvme in "${!nvme_files[@]}" 00:01:43.468 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:43.468 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:43.468 + for nvme in "${!nvme_files[@]}" 00:01:43.468 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:43.468 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:43.468 + for nvme in "${!nvme_files[@]}" 00:01:43.468 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:43.727 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:43.727 + for nvme in "${!nvme_files[@]}" 00:01:43.727 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:43.727 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:43.727 + for nvme in "${!nvme_files[@]}" 00:01:43.727 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G 00:01:43.727 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:43.727 + for nvme in "${!nvme_files[@]}" 00:01:43.727 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:43.727 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:43.727 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:43.727 + echo 'End stage prepare_nvme.sh' 00:01:43.727 End stage prepare_nvme.sh 00:01:43.740 [Pipeline] sh 00:01:44.028 + DISTRO=fedora39 00:01:44.028 + CPUS=10 00:01:44.028 + RAM=12288 00:01:44.028 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:44.028 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:44.028 00:01:44.028 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:44.028 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:44.028 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:44.028 HELP=0 00:01:44.028 DRY_RUN=0 00:01:44.028 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img, 00:01:44.028 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:44.028 NVME_AUTO_CREATE=0 00:01:44.028 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,, 00:01:44.028 NVME_CMB=,,,, 00:01:44.028 NVME_PMR=,,,, 00:01:44.028 NVME_ZNS=,,,, 00:01:44.028 NVME_MS=true,,,, 00:01:44.028 NVME_FDP=,,,on, 00:01:44.028 SPDK_VAGRANT_DISTRO=fedora39 00:01:44.028 SPDK_VAGRANT_VMCPU=10 00:01:44.028 SPDK_VAGRANT_VMRAM=12288 00:01:44.028 SPDK_VAGRANT_PROVIDER=libvirt 00:01:44.028 SPDK_VAGRANT_HTTP_PROXY= 00:01:44.028 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:44.028 SPDK_OPENSTACK_NETWORK=0 00:01:44.028 VAGRANT_PACKAGE_BOX=0 00:01:44.028 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:44.028 FORCE_DISTRO=true 00:01:44.028 VAGRANT_BOX_VERSION= 00:01:44.028 EXTRA_VAGRANTFILES= 00:01:44.028 NIC_MODEL=e1000 00:01:44.028 00:01:44.028 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:44.028 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:46.580 Bringing machine 'default' up with 'libvirt' provider... 00:01:46.841 ==> default: Creating image (snapshot of base box volume). 00:01:47.101 ==> default: Creating domain with the following settings... 
00:01:47.101 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731884830_063576b37643be68a9fd 00:01:47.101 ==> default: -- Domain type: kvm 00:01:47.101 ==> default: -- Cpus: 10 00:01:47.101 ==> default: -- Feature: acpi 00:01:47.101 ==> default: -- Feature: apic 00:01:47.101 ==> default: -- Feature: pae 00:01:47.101 ==> default: -- Memory: 12288M 00:01:47.101 ==> default: -- Memory Backing: hugepages: 00:01:47.101 ==> default: -- Management MAC: 00:01:47.101 ==> default: -- Loader: 00:01:47.101 ==> default: -- Nvram: 00:01:47.101 ==> default: -- Base box: spdk/fedora39 00:01:47.101 ==> default: -- Storage pool: default 00:01:47.101 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731884830_063576b37643be68a9fd.img (20G) 00:01:47.102 ==> default: -- Volume Cache: default 00:01:47.102 ==> default: -- Kernel: 00:01:47.102 ==> default: -- Initrd: 00:01:47.102 ==> default: -- Graphics Type: vnc 00:01:47.102 ==> default: -- Graphics Port: -1 00:01:47.102 ==> default: -- Graphics IP: 127.0.0.1 00:01:47.102 ==> default: -- Graphics Password: Not defined 00:01:47.102 ==> default: -- Video Type: cirrus 00:01:47.102 ==> default: -- Video VRAM: 9216 00:01:47.102 ==> default: -- Sound Type: 00:01:47.102 ==> default: -- Keymap: en-us 00:01:47.102 ==> default: -- TPM Path: 00:01:47.102 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:47.102 ==> default: -- Command line args: 00:01:47.102 ==> default: -> value=-device, 00:01:47.102 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:47.102 ==> default: -> value=-drive, 00:01:47.102 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:47.102 ==> default: -> value=-device, 00:01:47.102 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:47.102 ==> default: -> value=-device, 00:01:47.102 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:47.102 ==> default: -> value=-drive, 00:01:47.102 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0, 00:01:47.102 ==> default: -> value=-device, 00:01:47.102 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:47.102 ==> default: -> value=-device, 00:01:47.102 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:47.102 ==> default: -> value=-drive, 00:01:47.102 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:47.102 ==> default: -> value=-device, 00:01:47.102 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:47.102 ==> default: -> value=-drive, 00:01:47.102 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:47.102 ==> default: -> value=-device, 00:01:47.102 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:47.102 ==> default: -> value=-drive, 00:01:47.102 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:47.102 ==> default: -> value=-device, 00:01:47.102 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:47.102 ==> default: -> value=-device, 00:01:47.102 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:47.102 ==> default: -> value=-device, 00:01:47.102 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:47.102 ==> default: -> value=-drive, 00:01:47.102 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:47.102 ==> default: -> value=-device, 00:01:47.102 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:47.102 ==> default: Creating shared folders metadata... 00:01:47.102 ==> default: Starting domain. 00:01:49.016 ==> default: Waiting for domain to get an IP address... 00:02:07.136 ==> default: Waiting for SSH to become available... 00:02:07.136 ==> default: Configuring and enabling network interfaces... 00:02:09.679 default: SSH address: 192.168.121.229:22 00:02:09.679 default: SSH username: vagrant 00:02:09.679 default: SSH auth method: private key 00:02:11.589 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:19.854 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:25.146 ==> default: Mounting SSHFS shared folder... 00:02:26.634 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:26.634 ==> default: Checking Mount.. 00:02:28.063 ==> default: Folder Successfully Mounted! 00:02:28.063 00:02:28.063 SUCCESS! 00:02:28.063 00:02:28.063 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:28.063 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:28.063 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:28.063 00:02:28.074 [Pipeline] } 00:02:28.088 [Pipeline] // stage 00:02:28.096 [Pipeline] dir 00:02:28.096 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:28.098 [Pipeline] { 00:02:28.109 [Pipeline] catchError 00:02:28.111 [Pipeline] { 00:02:28.125 [Pipeline] sh 00:02:28.420 + vagrant ssh-config --host vagrant 00:02:28.420 + sed -ne '/^Host/,$p' 00:02:28.420 + tee ssh_conf 00:02:31.723 Host vagrant 00:02:31.723 HostName 192.168.121.229 00:02:31.723 User vagrant 00:02:31.723 Port 22 00:02:31.723 UserKnownHostsFile /dev/null 00:02:31.723 StrictHostKeyChecking no 00:02:31.723 PasswordAuthentication no 00:02:31.723 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:31.723 IdentitiesOnly yes 00:02:31.723 LogLevel FATAL 00:02:31.723 ForwardAgent yes 00:02:31.723 ForwardX11 yes 00:02:31.723 00:02:31.737 [Pipeline] withEnv 00:02:31.740 [Pipeline] { 00:02:31.752 [Pipeline] sh 00:02:32.038 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:32.038 source /etc/os-release 00:02:32.038 [[ -e /image.version ]] && img=$(< /image.version) 00:02:32.038 # Minimal, systemd-like check. 
00:02:32.038 if [[ -e /.dockerenv ]]; then 00:02:32.038 # Clear garbage from the node'\''s name: 00:02:32.038 # agt-er_autotest_547-896 -> autotest_547-896 00:02:32.038 # $HOSTNAME is the actual container id 00:02:32.038 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:32.038 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:32.038 # We can assume this is a mount from a host where container is running, 00:02:32.038 # so fetch its hostname to easily identify the target swarm worker. 00:02:32.038 container="$(< /etc/hostname) ($agent)" 00:02:32.038 else 00:02:32.038 # Fallback 00:02:32.038 container=$agent 00:02:32.038 fi 00:02:32.038 fi 00:02:32.038 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:32.038 ' 00:02:32.309 [Pipeline] } 00:02:32.324 [Pipeline] // withEnv 00:02:32.332 [Pipeline] setCustomBuildProperty 00:02:32.346 [Pipeline] stage 00:02:32.348 [Pipeline] { (Tests) 00:02:32.363 [Pipeline] sh 00:02:32.648 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:32.923 [Pipeline] sh 00:02:33.207 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:33.484 [Pipeline] timeout 00:02:33.485 Timeout set to expire in 50 min 00:02:33.486 [Pipeline] { 00:02:33.499 [Pipeline] sh 00:02:33.782 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:34.354 HEAD is now at 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:02:34.367 [Pipeline] sh 00:02:34.695 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:34.727 [Pipeline] sh 00:02:35.013 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:35.294 [Pipeline] sh 00:02:35.577 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:35.839 ++ readlink -f spdk_repo 00:02:35.839 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:35.839 + [[ -n /home/vagrant/spdk_repo ]] 00:02:35.839 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:35.839 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:35.839 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:35.839 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:35.839 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:35.839 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:35.839 + cd /home/vagrant/spdk_repo 00:02:35.839 + source /etc/os-release 00:02:35.839 ++ NAME='Fedora Linux' 00:02:35.839 ++ VERSION='39 (Cloud Edition)' 00:02:35.839 ++ ID=fedora 00:02:35.839 ++ VERSION_ID=39 00:02:35.839 ++ VERSION_CODENAME= 00:02:35.839 ++ PLATFORM_ID=platform:f39 00:02:35.839 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:35.839 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:35.839 ++ LOGO=fedora-logo-icon 00:02:35.839 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:35.839 ++ HOME_URL=https://fedoraproject.org/ 00:02:35.839 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:35.839 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:35.839 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:35.839 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:35.839 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:35.839 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:35.839 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:35.839 ++ SUPPORT_END=2024-11-12 00:02:35.839 ++ VARIANT='Cloud Edition' 00:02:35.839 ++ VARIANT_ID=cloud 00:02:35.839 + uname -a 00:02:35.839 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:35.839 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:36.101 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:36.362 Hugepages 00:02:36.362 node hugesize free / total 00:02:36.362 node0 1048576kB 0 / 0 00:02:36.362 node0 2048kB 0 / 0 00:02:36.362 00:02:36.362 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:36.362 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:36.623 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:36.623 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:36.623 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:36.623 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:02:36.623 + rm -f /tmp/spdk-ld-path 00:02:36.623 + source autorun-spdk.conf 00:02:36.623 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:36.623 ++ SPDK_TEST_NVME=1 00:02:36.623 ++ SPDK_TEST_FTL=1 00:02:36.623 ++ SPDK_TEST_ISAL=1 00:02:36.623 ++ SPDK_RUN_ASAN=1 00:02:36.623 ++ SPDK_RUN_UBSAN=1 00:02:36.623 ++ SPDK_TEST_XNVME=1 00:02:36.623 ++ SPDK_TEST_NVME_FDP=1 00:02:36.623 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:36.623 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:36.623 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:36.623 ++ RUN_NIGHTLY=1 00:02:36.623 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:36.623 + [[ -n '' ]] 00:02:36.623 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:36.623 + for M in /var/spdk/build-*-manifest.txt 00:02:36.623 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:36.623 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:36.623 + for M in /var/spdk/build-*-manifest.txt 00:02:36.623 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:36.623 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:36.623 + for M in /var/spdk/build-*-manifest.txt 00:02:36.623 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:36.623 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:36.623 ++ uname 00:02:36.623 + [[ Linux == 
\L\i\n\u\x ]] 00:02:36.623 + sudo dmesg -T 00:02:36.623 + sudo dmesg --clear 00:02:36.623 + dmesg_pid=5763 00:02:36.623 + [[ Fedora Linux == FreeBSD ]] 00:02:36.623 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:36.623 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:36.623 + sudo dmesg -Tw 00:02:36.623 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:36.623 + [[ -x /usr/src/fio-static/fio ]] 00:02:36.623 + export FIO_BIN=/usr/src/fio-static/fio 00:02:36.623 + FIO_BIN=/usr/src/fio-static/fio 00:02:36.623 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:36.623 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:36.623 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:36.623 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:36.623 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:36.623 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:36.623 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:36.623 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:36.623 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:36.899 23:08:00 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:36.899 23:08:00 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:36.899 23:08:00 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:36.899 23:08:00 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:36.899 23:08:00 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:36.899 23:08:00 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:36.899 23:08:00 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:36.899 23:08:00 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:36.899 23:08:00 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:36.899 23:08:00 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:36.899 23:08:00 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:36.899 23:08:00 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:36.899 23:08:00 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:36.899 23:08:00 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1 00:02:36.899 23:08:00 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:36.900 23:08:00 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:36.900 23:08:00 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:36.900 23:08:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:36.900 23:08:00 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:36.900 23:08:00 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:36.900 23:08:00 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:36.900 23:08:00 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:36.900 23:08:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.900 23:08:00 -- 
paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.900 23:08:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.900 23:08:00 -- paths/export.sh@5 -- $ export PATH 00:02:36.900 23:08:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.900 23:08:00 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:36.900 23:08:00 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:36.900 23:08:00 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731884880.XXXXXX 00:02:36.900 23:08:00 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731884880.VJFLTG 00:02:36.900 23:08:00 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:36.900 23:08:00 -- common/autobuild_common.sh@492 -- $ '[' -n v22.11.4 ']' 00:02:36.900 23:08:00 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:36.900 23:08:00 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:36.900 23:08:00 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:36.900 23:08:00 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:36.900 23:08:00 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:36.900 23:08:00 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:36.900 23:08:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.900 23:08:00 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:36.900 23:08:00 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:36.900 23:08:00 -- pm/common@17 -- $ local monitor 00:02:36.900 23:08:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.900 23:08:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.900 23:08:00 -- pm/common@25 -- $ 
sleep 1 00:02:36.900 23:08:00 -- pm/common@21 -- $ date +%s 00:02:36.900 23:08:00 -- pm/common@21 -- $ date +%s 00:02:36.900 23:08:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731884880 00:02:36.900 23:08:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731884880 00:02:36.900 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731884880_collect-vmstat.pm.log 00:02:36.900 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731884880_collect-cpu-load.pm.log 00:02:37.844 23:08:01 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:37.844 23:08:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:37.844 23:08:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:37.844 23:08:01 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:37.844 23:08:01 -- spdk/autobuild.sh@16 -- $ date -u 00:02:37.844 Sun Nov 17 11:08:01 PM UTC 2024 00:02:37.844 23:08:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:37.844 v25.01-pre-189-g83e8405e4 00:02:37.844 23:08:01 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:37.844 23:08:01 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:37.844 23:08:01 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:37.844 23:08:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:37.844 23:08:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:37.844 ************************************ 00:02:37.844 START TEST asan 00:02:37.844 ************************************ 00:02:37.844 using asan 00:02:37.844 23:08:01 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:37.844 00:02:37.844 real 0m0.000s 00:02:37.844 user 0m0.000s 00:02:37.844 sys 0m0.000s 00:02:37.844 23:08:01 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:37.844 23:08:01 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:37.844 ************************************ 00:02:37.844 END TEST asan 00:02:37.844 ************************************ 00:02:37.844 23:08:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:37.844 23:08:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:37.844 23:08:01 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:37.844 23:08:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:37.844 23:08:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:38.105 ************************************ 00:02:38.105 START TEST ubsan 00:02:38.105 ************************************ 00:02:38.105 using ubsan 00:02:38.105 23:08:01 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:38.105 00:02:38.105 real 0m0.000s 00:02:38.105 user 0m0.000s 00:02:38.105 sys 0m0.000s 00:02:38.105 23:08:01 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:38.105 23:08:01 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:38.105 ************************************ 00:02:38.105 END TEST ubsan 00:02:38.105 ************************************ 00:02:38.105 23:08:01 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:38.105 23:08:01 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:38.105 23:08:01 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:38.105 23:08:01 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 
1 ']' 00:02:38.105 23:08:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:38.105 23:08:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:38.105 ************************************ 00:02:38.105 START TEST build_native_dpdk 00:02:38.105 ************************************ 00:02:38.105 23:08:01 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:38.105 caf0f5d395 version: 22.11.4 00:02:38.105 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:38.105 dc9c799c7d vhost: fix missing spinlock unlock 00:02:38.105 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:38.105 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:38.105 23:08:01 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:38.106 23:08:01 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:38.106 23:08:01 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:38.106 23:08:01 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:38.106 23:08:01 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:38.106 23:08:01 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:38.106 23:08:01 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:38.106 23:08:01 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:38.106 23:08:01 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:38.106 23:08:01 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:38.106 23:08:01 
build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:38.106 23:08:01 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:38.106 patching file config/rte_config.h 00:02:38.106 Hunk #1 succeeded at 60 (offset 1 line). 00:02:38.106 23:08:01 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:38.106 23:08:01 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:38.106 patching file lib/pcapng/rte_pcapng.c 00:02:38.106 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:38.106 23:08:01 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:38.106 23:08:01 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:38.106 23:08:01 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:38.106 23:08:01 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:38.106 23:08:01 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:38.106 23:08:01 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:38.106 23:08:01 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:43.398 The Meson build system 00:02:43.398 Version: 1.5.0 00:02:43.398 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:43.398 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:43.398 Build type: native build 00:02:43.398 Program cat found: YES (/usr/bin/cat) 00:02:43.398 Project name: DPDK 00:02:43.398 Project version: 22.11.4 00:02:43.398 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:43.398 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:43.398 Host machine cpu family: x86_64 00:02:43.398 Host machine cpu: x86_64 00:02:43.398 Message: ## Building in Developer Mode ## 00:02:43.398 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:43.398 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:43.398 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:43.398 Program objdump found: YES (/usr/bin/objdump) 00:02:43.398 Program python3 found: YES (/usr/bin/python3) 00:02:43.398 Program cat found: YES (/usr/bin/cat) 00:02:43.398 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:43.398 Checking for size of "void *" : 8 00:02:43.398 Checking for size of "void *" : 8 (cached) 00:02:43.398 Library m found: YES 00:02:43.398 Library numa found: YES 00:02:43.398 Has header "numaif.h" : YES 00:02:43.398 Library fdt found: NO 00:02:43.398 Library execinfo found: NO 00:02:43.398 Has header "execinfo.h" : YES 00:02:43.398 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:43.398 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:43.398 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:43.398 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:43.398 Run-time dependency openssl found: YES 3.1.1 00:02:43.398 Run-time dependency libpcap found: YES 1.10.4 00:02:43.398 Has header "pcap.h" with dependency libpcap: YES 00:02:43.398 Compiler for C supports arguments -Wcast-qual: YES 00:02:43.398 Compiler for C supports arguments -Wdeprecated: YES 00:02:43.398 Compiler for C supports arguments -Wformat: YES 00:02:43.398 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:43.398 Compiler for C supports arguments -Wformat-security: NO 00:02:43.398 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:43.398 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:43.398 Compiler for C supports arguments -Wnested-externs: YES 00:02:43.398 Compiler for C supports arguments -Wold-style-definition: YES 00:02:43.398 Compiler for C supports arguments -Wpointer-arith: YES 00:02:43.398 Compiler for C supports arguments -Wsign-compare: YES 00:02:43.398 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:43.398 Compiler for C supports arguments -Wundef: YES 00:02:43.398 Compiler for C supports arguments -Wwrite-strings: YES 00:02:43.398 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:43.398 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:43.398 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:43.398 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:43.398 Compiler for C supports arguments -mavx512f: YES 00:02:43.398 Checking if "AVX512 checking" compiles: YES 00:02:43.398 Fetching value of define "__SSE4_2__" : 1 00:02:43.398 Fetching value of define "__AES__" : 1 00:02:43.398 Fetching value of define "__AVX__" : 1 00:02:43.398 Fetching value of define "__AVX2__" : 1 00:02:43.398 Fetching value of define "__AVX512BW__" : 1 00:02:43.398 Fetching value of define "__AVX512CD__" : 1 00:02:43.398 Fetching value of define "__AVX512DQ__" : 1 00:02:43.398 Fetching value of define "__AVX512F__" : 1 00:02:43.398 Fetching value of define "__AVX512VL__" : 1 00:02:43.398 Fetching value of define "__PCLMUL__" : 1 00:02:43.398 Fetching value of define "__RDRND__" : 1 00:02:43.398 Fetching value of define "__RDSEED__" : 1 00:02:43.398 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:43.398 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:43.398 Message: lib/kvargs: Defining dependency "kvargs" 00:02:43.398 Message: lib/telemetry: Defining dependency "telemetry" 00:02:43.398 Checking for function "getentropy" : YES 00:02:43.398 Message: lib/eal: Defining dependency "eal" 00:02:43.398 Message: lib/ring: Defining dependency "ring" 00:02:43.398 Message: lib/rcu: Defining dependency "rcu" 00:02:43.398 Message: lib/mempool: Defining dependency "mempool" 00:02:43.398 Message: lib/mbuf: Defining dependency "mbuf" 00:02:43.398 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:43.398 Fetching value of 
define "__AVX512F__" : 1 (cached) 00:02:43.398 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:43.398 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:43.398 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:43.398 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:43.398 Compiler for C supports arguments -mpclmul: YES 00:02:43.398 Compiler for C supports arguments -maes: YES 00:02:43.398 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:43.398 Compiler for C supports arguments -mavx512bw: YES 00:02:43.398 Compiler for C supports arguments -mavx512dq: YES 00:02:43.398 Compiler for C supports arguments -mavx512vl: YES 00:02:43.398 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:43.398 Compiler for C supports arguments -mavx2: YES 00:02:43.398 Compiler for C supports arguments -mavx: YES 00:02:43.398 Message: lib/net: Defining dependency "net" 00:02:43.398 Message: lib/meter: Defining dependency "meter" 00:02:43.398 Message: lib/ethdev: Defining dependency "ethdev" 00:02:43.398 Message: lib/pci: Defining dependency "pci" 00:02:43.398 Message: lib/cmdline: Defining dependency "cmdline" 00:02:43.398 Message: lib/metrics: Defining dependency "metrics" 00:02:43.398 Message: lib/hash: Defining dependency "hash" 00:02:43.398 Message: lib/timer: Defining dependency "timer" 00:02:43.398 Fetching value of define "__AVX2__" : 1 (cached) 00:02:43.398 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:43.398 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:43.398 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:43.398 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:43.398 Message: lib/acl: Defining dependency "acl" 00:02:43.398 Message: lib/bbdev: Defining dependency "bbdev" 00:02:43.398 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:43.398 Run-time dependency libelf found: YES 0.191 00:02:43.398 Message: lib/bpf: Defining dependency "bpf" 00:02:43.398 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:43.398 Message: lib/compressdev: Defining dependency "compressdev" 00:02:43.398 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:43.398 Message: lib/distributor: Defining dependency "distributor" 00:02:43.398 Message: lib/efd: Defining dependency "efd" 00:02:43.398 Message: lib/eventdev: Defining dependency "eventdev" 00:02:43.398 Message: lib/gpudev: Defining dependency "gpudev" 00:02:43.398 Message: lib/gro: Defining dependency "gro" 00:02:43.398 Message: lib/gso: Defining dependency "gso" 00:02:43.398 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:43.398 Message: lib/jobstats: Defining dependency "jobstats" 00:02:43.398 Message: lib/latencystats: Defining dependency "latencystats" 00:02:43.398 Message: lib/lpm: Defining dependency "lpm" 00:02:43.398 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:43.398 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:43.398 Fetching value of define "__AVX512IFMA__" : 1 00:02:43.398 Message: lib/member: Defining dependency "member" 00:02:43.398 Message: lib/pcapng: Defining dependency "pcapng" 00:02:43.398 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:43.398 Message: lib/power: Defining dependency "power" 00:02:43.398 Message: lib/rawdev: Defining dependency "rawdev" 00:02:43.398 Message: lib/regexdev: Defining dependency "regexdev" 00:02:43.398 Message: lib/dmadev: Defining dependency "dmadev" 00:02:43.398 Message: lib/rib: Defining dependency "rib" 00:02:43.398 Message: lib/reorder: 
Defining dependency "reorder" 00:02:43.398 Message: lib/sched: Defining dependency "sched" 00:02:43.398 Message: lib/security: Defining dependency "security" 00:02:43.398 Message: lib/stack: Defining dependency "stack" 00:02:43.398 Has header "linux/userfaultfd.h" : YES 00:02:43.398 Message: lib/vhost: Defining dependency "vhost" 00:02:43.398 Message: lib/ipsec: Defining dependency "ipsec" 00:02:43.398 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:43.398 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:43.398 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:43.398 Message: lib/fib: Defining dependency "fib" 00:02:43.398 Message: lib/port: Defining dependency "port" 00:02:43.398 Message: lib/pdump: Defining dependency "pdump" 00:02:43.398 Message: lib/table: Defining dependency "table" 00:02:43.398 Message: lib/pipeline: Defining dependency "pipeline" 00:02:43.398 Message: lib/graph: Defining dependency "graph" 00:02:43.398 Message: lib/node: Defining dependency "node" 00:02:43.398 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:43.398 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:43.398 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:43.398 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:43.398 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:43.398 Compiler for C supports arguments -Wno-unused-value: YES 00:02:43.398 Compiler for C supports arguments -Wno-format: YES 00:02:43.398 Compiler for C supports arguments -Wno-format-security: YES 00:02:43.398 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:43.398 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:43.398 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:43.398 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:43.975 Fetching value of define "__AVX2__" : 1 (cached) 00:02:43.975 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:43.975 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:43.975 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:43.975 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:43.975 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:43.975 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:43.975 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:43.975 Configuring doxy-api.conf using configuration 00:02:43.975 Program sphinx-build found: NO 00:02:43.975 Configuring rte_build_config.h using configuration 00:02:43.975 Message: 00:02:43.975 ================= 00:02:43.975 Applications Enabled 00:02:43.975 ================= 00:02:43.975 00:02:43.975 apps: 00:02:43.975 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:43.975 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:43.975 test-security-perf, 00:02:43.975 00:02:43.975 Message: 00:02:43.975 ================= 00:02:43.975 Libraries Enabled 00:02:43.975 ================= 00:02:43.975 00:02:43.975 libs: 00:02:43.975 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:43.975 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:43.975 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:43.975 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:43.975 member, pcapng, power, rawdev, regexdev, dmadev, rib, 
reorder, 00:02:43.975 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:43.975 table, pipeline, graph, node, 00:02:43.975 00:02:43.975 Message: 00:02:43.975 =============== 00:02:43.975 Drivers Enabled 00:02:43.975 =============== 00:02:43.975 00:02:43.975 common: 00:02:43.975 00:02:43.975 bus: 00:02:43.975 pci, vdev, 00:02:43.975 mempool: 00:02:43.975 ring, 00:02:43.975 dma: 00:02:43.975 00:02:43.975 net: 00:02:43.975 i40e, 00:02:43.975 raw: 00:02:43.975 00:02:43.975 crypto: 00:02:43.975 00:02:43.975 compress: 00:02:43.975 00:02:43.975 regex: 00:02:43.975 00:02:43.975 vdpa: 00:02:43.975 00:02:43.975 event: 00:02:43.975 00:02:43.975 baseband: 00:02:43.975 00:02:43.975 gpu: 00:02:43.975 00:02:43.975 00:02:43.975 Message: 00:02:43.975 ================= 00:02:43.975 Content Skipped 00:02:43.975 ================= 00:02:43.975 00:02:43.975 apps: 00:02:43.975 00:02:43.975 libs: 00:02:43.975 kni: explicitly disabled via build config (deprecated lib) 00:02:43.975 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:43.975 00:02:43.975 drivers: 00:02:43.975 common/cpt: not in enabled drivers build config 00:02:43.975 common/dpaax: not in enabled drivers build config 00:02:43.975 common/iavf: not in enabled drivers build config 00:02:43.975 common/idpf: not in enabled drivers build config 00:02:43.975 common/mvep: not in enabled drivers build config 00:02:43.975 common/octeontx: not in enabled drivers build config 00:02:43.975 bus/auxiliary: not in enabled drivers build config 00:02:43.975 bus/dpaa: not in enabled drivers build config 00:02:43.975 bus/fslmc: not in enabled drivers build config 00:02:43.975 bus/ifpga: not in enabled drivers build config 00:02:43.975 bus/vmbus: not in enabled drivers build config 00:02:43.975 common/cnxk: not in enabled drivers build config 00:02:43.975 common/mlx5: not in enabled drivers build config 00:02:43.975 common/qat: not in enabled drivers build config 00:02:43.975 common/sfc_efx: not in enabled drivers build config 00:02:43.975 mempool/bucket: not in enabled drivers build config 00:02:43.975 mempool/cnxk: not in enabled drivers build config 00:02:43.975 mempool/dpaa: not in enabled drivers build config 00:02:43.975 mempool/dpaa2: not in enabled drivers build config 00:02:43.975 mempool/octeontx: not in enabled drivers build config 00:02:43.975 mempool/stack: not in enabled drivers build config 00:02:43.975 dma/cnxk: not in enabled drivers build config 00:02:43.975 dma/dpaa: not in enabled drivers build config 00:02:43.975 dma/dpaa2: not in enabled drivers build config 00:02:43.975 dma/hisilicon: not in enabled drivers build config 00:02:43.975 dma/idxd: not in enabled drivers build config 00:02:43.975 dma/ioat: not in enabled drivers build config 00:02:43.975 dma/skeleton: not in enabled drivers build config 00:02:43.975 net/af_packet: not in enabled drivers build config 00:02:43.975 net/af_xdp: not in enabled drivers build config 00:02:43.975 net/ark: not in enabled drivers build config 00:02:43.975 net/atlantic: not in enabled drivers build config 00:02:43.975 net/avp: not in enabled drivers build config 00:02:43.975 net/axgbe: not in enabled drivers build config 00:02:43.975 net/bnx2x: not in enabled drivers build config 00:02:43.975 net/bnxt: not in enabled drivers build config 00:02:43.975 net/bonding: not in enabled drivers build config 00:02:43.975 net/cnxk: not in enabled drivers build config 00:02:43.975 net/cxgbe: not in enabled drivers build config 00:02:43.975 net/dpaa: not in enabled drivers build config 
00:02:43.975 net/dpaa2: not in enabled drivers build config 00:02:43.975 net/e1000: not in enabled drivers build config 00:02:43.975 net/ena: not in enabled drivers build config 00:02:43.975 net/enetc: not in enabled drivers build config 00:02:43.975 net/enetfec: not in enabled drivers build config 00:02:43.975 net/enic: not in enabled drivers build config 00:02:43.976 net/failsafe: not in enabled drivers build config 00:02:43.976 net/fm10k: not in enabled drivers build config 00:02:43.976 net/gve: not in enabled drivers build config 00:02:43.976 net/hinic: not in enabled drivers build config 00:02:43.976 net/hns3: not in enabled drivers build config 00:02:43.976 net/iavf: not in enabled drivers build config 00:02:43.976 net/ice: not in enabled drivers build config 00:02:43.976 net/idpf: not in enabled drivers build config 00:02:43.976 net/igc: not in enabled drivers build config 00:02:43.976 net/ionic: not in enabled drivers build config 00:02:43.976 net/ipn3ke: not in enabled drivers build config 00:02:43.976 net/ixgbe: not in enabled drivers build config 00:02:43.976 net/kni: not in enabled drivers build config 00:02:43.976 net/liquidio: not in enabled drivers build config 00:02:43.976 net/mana: not in enabled drivers build config 00:02:43.976 net/memif: not in enabled drivers build config 00:02:43.976 net/mlx4: not in enabled drivers build config 00:02:43.976 net/mlx5: not in enabled drivers build config 00:02:43.976 net/mvneta: not in enabled drivers build config 00:02:43.976 net/mvpp2: not in enabled drivers build config 00:02:43.976 net/netvsc: not in enabled drivers build config 00:02:43.976 net/nfb: not in enabled drivers build config 00:02:43.976 net/nfp: not in enabled drivers build config 00:02:43.976 net/ngbe: not in enabled drivers build config 00:02:43.976 net/null: not in enabled drivers build config 00:02:43.976 net/octeontx: not in enabled drivers build config 00:02:43.976 net/octeon_ep: not in enabled drivers build config 00:02:43.976 net/pcap: not in enabled drivers build config 00:02:43.976 net/pfe: not in enabled drivers build config 00:02:43.976 net/qede: not in enabled drivers build config 00:02:43.976 net/ring: not in enabled drivers build config 00:02:43.976 net/sfc: not in enabled drivers build config 00:02:43.976 net/softnic: not in enabled drivers build config 00:02:43.976 net/tap: not in enabled drivers build config 00:02:43.976 net/thunderx: not in enabled drivers build config 00:02:43.976 net/txgbe: not in enabled drivers build config 00:02:43.976 net/vdev_netvsc: not in enabled drivers build config 00:02:43.976 net/vhost: not in enabled drivers build config 00:02:43.976 net/virtio: not in enabled drivers build config 00:02:43.976 net/vmxnet3: not in enabled drivers build config 00:02:43.976 raw/cnxk_bphy: not in enabled drivers build config 00:02:43.976 raw/cnxk_gpio: not in enabled drivers build config 00:02:43.976 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:43.976 raw/ifpga: not in enabled drivers build config 00:02:43.976 raw/ntb: not in enabled drivers build config 00:02:43.976 raw/skeleton: not in enabled drivers build config 00:02:43.976 crypto/armv8: not in enabled drivers build config 00:02:43.976 crypto/bcmfs: not in enabled drivers build config 00:02:43.976 crypto/caam_jr: not in enabled drivers build config 00:02:43.976 crypto/ccp: not in enabled drivers build config 00:02:43.976 crypto/cnxk: not in enabled drivers build config 00:02:43.976 crypto/dpaa_sec: not in enabled drivers build config 00:02:43.976 crypto/dpaa2_sec: not in 
enabled drivers build config 00:02:43.976 crypto/ipsec_mb: not in enabled drivers build config 00:02:43.976 crypto/mlx5: not in enabled drivers build config 00:02:43.976 crypto/mvsam: not in enabled drivers build config 00:02:43.976 crypto/nitrox: not in enabled drivers build config 00:02:43.976 crypto/null: not in enabled drivers build config 00:02:43.976 crypto/octeontx: not in enabled drivers build config 00:02:43.976 crypto/openssl: not in enabled drivers build config 00:02:43.976 crypto/scheduler: not in enabled drivers build config 00:02:43.976 crypto/uadk: not in enabled drivers build config 00:02:43.976 crypto/virtio: not in enabled drivers build config 00:02:43.976 compress/isal: not in enabled drivers build config 00:02:43.976 compress/mlx5: not in enabled drivers build config 00:02:43.976 compress/octeontx: not in enabled drivers build config 00:02:43.976 compress/zlib: not in enabled drivers build config 00:02:43.976 regex/mlx5: not in enabled drivers build config 00:02:43.976 regex/cn9k: not in enabled drivers build config 00:02:43.976 vdpa/ifc: not in enabled drivers build config 00:02:43.976 vdpa/mlx5: not in enabled drivers build config 00:02:43.976 vdpa/sfc: not in enabled drivers build config 00:02:43.976 event/cnxk: not in enabled drivers build config 00:02:43.976 event/dlb2: not in enabled drivers build config 00:02:43.976 event/dpaa: not in enabled drivers build config 00:02:43.976 event/dpaa2: not in enabled drivers build config 00:02:43.976 event/dsw: not in enabled drivers build config 00:02:43.976 event/opdl: not in enabled drivers build config 00:02:43.976 event/skeleton: not in enabled drivers build config 00:02:43.976 event/sw: not in enabled drivers build config 00:02:43.976 event/octeontx: not in enabled drivers build config 00:02:43.976 baseband/acc: not in enabled drivers build config 00:02:43.976 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:43.976 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:43.976 baseband/la12xx: not in enabled drivers build config 00:02:43.976 baseband/null: not in enabled drivers build config 00:02:43.976 baseband/turbo_sw: not in enabled drivers build config 00:02:43.976 gpu/cuda: not in enabled drivers build config 00:02:43.976 00:02:43.976 00:02:43.976 Build targets in project: 309 00:02:43.976 00:02:43.976 DPDK 22.11.4 00:02:43.976 00:02:43.976 User defined options 00:02:43.976 libdir : lib 00:02:43.976 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:43.976 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:43.976 c_link_args : 00:02:43.976 enable_docs : false 00:02:43.976 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:43.976 enable_kmods : false 00:02:43.976 machine : native 00:02:43.976 tests : false 00:02:43.976 00:02:43.976 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:43.976 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
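For reference, the "User defined options" summary above maps onto a standalone configure step. The sketch below reconstructs it using the explicit `meson setup` spelling that the deprecation warning asks for; the option values are copied from the logged summary, while the working directory and any extra flags added by SPDK's build_native_dpdk wrapper are assumptions.

    # Minimal sketch: reproduce the logged DPDK 22.11.4 configuration by hand.
    # Option values come from the "User defined options" block above; paths
    # assume the vagrant layout shown in the log.
    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    ninja -C build-tmp -j10    # same invocation the autobuild step runs next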
00:02:43.976 23:08:07 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:43.976 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:43.976 [1/738] Generating lib/rte_telemetry_mingw with a custom command 00:02:43.976 [2/738] Generating lib/rte_kvargs_def with a custom command 00:02:43.976 [3/738] Generating lib/rte_kvargs_mingw with a custom command 00:02:43.976 [4/738] Generating lib/rte_telemetry_def with a custom command 00:02:44.237 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:44.237 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:44.237 [7/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:44.237 [8/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:44.237 [9/738] Linking static target lib/librte_kvargs.a 00:02:44.237 [10/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:44.237 [11/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:44.237 [12/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:44.237 [13/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:44.237 [14/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:44.237 [15/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:44.237 [16/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:44.237 [17/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:44.499 [18/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:44.499 [19/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:44.499 [20/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.499 [21/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:44.499 [22/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:44.499 [23/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:44.499 [24/738] Linking target lib/librte_kvargs.so.23.0 00:02:44.499 [25/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:44.499 [26/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:44.499 [27/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:44.499 [28/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:44.499 [29/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:44.499 [30/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:44.499 [31/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:44.499 [32/738] Linking static target lib/librte_telemetry.a 00:02:44.499 [33/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:44.764 [34/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:44.764 [35/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:44.764 [36/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:44.764 [37/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:44.764 [38/738] Generating symbol file 
lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:44.764 [39/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:44.764 [40/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:44.764 [41/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:45.024 [42/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.024 [43/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:45.024 [44/738] Linking target lib/librte_telemetry.so.23.0 00:02:45.024 [45/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:45.024 [46/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:45.024 [47/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:45.024 [48/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:45.024 [49/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:45.024 [50/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:45.024 [51/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:45.024 [52/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:45.024 [53/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:45.286 [54/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:45.286 [55/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:45.286 [56/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:45.286 [57/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:45.286 [58/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:45.286 [59/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:45.286 [60/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:45.286 [61/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:45.286 [62/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:45.286 [63/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:45.286 [64/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:45.286 [65/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:45.286 [66/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:45.286 [67/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:45.286 [68/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:45.286 [69/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:45.286 [70/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:45.286 [71/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:45.547 [72/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:45.547 [73/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:45.547 [74/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:45.547 [75/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:45.547 [76/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:45.547 [77/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:45.547 [78/738] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:45.547 [79/738] Generating lib/rte_eal_def with a custom command 00:02:45.547 [80/738] Generating lib/rte_eal_mingw with a custom command 00:02:45.547 [81/738] Generating lib/rte_ring_def with a custom command 00:02:45.547 [82/738] Generating lib/rte_ring_mingw with a custom command 00:02:45.547 [83/738] Generating lib/rte_rcu_def with a custom command 00:02:45.547 [84/738] Generating lib/rte_rcu_mingw with a custom command 00:02:45.547 [85/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:45.547 [86/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:45.547 [87/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:45.547 [88/738] Linking static target lib/librte_ring.a 00:02:45.805 [89/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:45.805 [90/738] Generating lib/rte_mempool_def with a custom command 00:02:45.805 [91/738] Generating lib/rte_mempool_mingw with a custom command 00:02:45.805 [92/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:45.805 [93/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:45.805 [94/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.805 [95/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:46.063 [96/738] Generating lib/rte_mbuf_def with a custom command 00:02:46.063 [97/738] Generating lib/rte_mbuf_mingw with a custom command 00:02:46.063 [98/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:46.063 [99/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:46.063 [100/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:46.063 [101/738] Linking static target lib/librte_eal.a 00:02:46.063 [102/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:46.063 [103/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:46.323 [104/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:46.323 [105/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:46.323 [106/738] Linking static target lib/librte_rcu.a 00:02:46.323 [107/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:46.323 [108/738] Linking static target lib/librte_mempool.a 00:02:46.323 [109/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:46.323 [110/738] Generating lib/rte_net_def with a custom command 00:02:46.323 [111/738] Generating lib/rte_net_mingw with a custom command 00:02:46.323 [112/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:46.323 [113/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:46.323 [114/738] Generating lib/rte_meter_def with a custom command 00:02:46.323 [115/738] Generating lib/rte_meter_mingw with a custom command 00:02:46.323 [116/738] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:46.582 [117/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:46.582 [118/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.582 [119/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:46.582 [120/738] Linking static target lib/librte_meter.a 00:02:46.582 [121/738] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:46.582 [122/738] Generating lib/meter.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:46.582 [123/738] Linking static target lib/librte_net.a 00:02:46.840 [124/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:46.840 [125/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.840 [126/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:46.840 [127/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:46.840 [128/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:46.840 [129/738] Linking static target lib/librte_mbuf.a 00:02:46.840 [130/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:46.840 [131/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.840 [132/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:47.100 [133/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:47.100 [134/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:47.361 [135/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.361 [136/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:47.361 [137/738] Generating lib/rte_ethdev_def with a custom command 00:02:47.361 [138/738] Generating lib/rte_ethdev_mingw with a custom command 00:02:47.361 [139/738] Generating lib/rte_pci_def with a custom command 00:02:47.361 [140/738] Generating lib/rte_pci_mingw with a custom command 00:02:47.361 [141/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:47.361 [142/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:47.361 [143/738] Linking static target lib/librte_pci.a 00:02:47.361 [144/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:47.361 [145/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:47.361 [146/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:47.622 [147/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.622 [148/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:47.622 [149/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:47.622 [150/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:47.622 [151/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:47.622 [152/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:47.622 [153/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:47.622 [154/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:47.622 [155/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:47.622 [156/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:47.622 [157/738] Generating lib/rte_cmdline_def with a custom command 00:02:47.622 [158/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:47.622 [159/738] Generating lib/rte_cmdline_mingw with a custom command 00:02:47.622 [160/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:47.622 [161/738] Generating lib/rte_metrics_def with a custom command 00:02:47.622 [162/738] Generating lib/rte_metrics_mingw with a custom command 00:02:47.622 [163/738] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:47.622 [164/738] Generating lib/rte_hash_def with a custom command 00:02:47.881 [165/738] Generating lib/rte_hash_mingw with a custom command 00:02:47.881 [166/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:47.881 [167/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:47.881 [168/738] Generating lib/rte_timer_def with a custom command 00:02:47.881 [169/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:47.881 [170/738] Generating lib/rte_timer_mingw with a custom command 00:02:47.881 [171/738] Linking static target lib/librte_cmdline.a 00:02:47.881 [172/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:47.881 [173/738] Linking static target lib/librte_metrics.a 00:02:48.138 [174/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:48.138 [175/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:48.138 [176/738] Linking static target lib/librte_timer.a 00:02:48.138 [177/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.396 [178/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:48.396 [179/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.396 [180/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:48.396 [181/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.670 [182/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:48.670 [183/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:48.670 [184/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:48.670 [185/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:48.670 [186/738] Generating lib/rte_acl_def with a custom command 00:02:48.670 [187/738] Linking static target lib/librte_ethdev.a 00:02:48.670 [188/738] Generating lib/rte_acl_mingw with a custom command 00:02:48.670 [189/738] Generating lib/rte_bbdev_def with a custom command 00:02:48.670 [190/738] Generating lib/rte_bbdev_mingw with a custom command 00:02:48.670 [191/738] Generating lib/rte_bitratestats_def with a custom command 00:02:48.670 [192/738] Generating lib/rte_bitratestats_mingw with a custom command 00:02:48.932 [193/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:48.932 [194/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:48.932 [195/738] Linking static target lib/librte_bitratestats.a 00:02:48.932 [196/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:49.189 [197/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:49.189 [198/738] Linking static target lib/librte_bbdev.a 00:02:49.189 [199/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:49.189 [200/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.446 [201/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:49.446 [202/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:49.446 [203/738] Linking static target lib/librte_hash.a 00:02:49.446 [204/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:49.446 [205/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.703 [206/738] Compiling C object 
lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:49.703 [207/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:49.703 [208/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:49.959 [209/738] Generating lib/rte_bpf_def with a custom command 00:02:49.959 [210/738] Generating lib/rte_bpf_mingw with a custom command 00:02:49.959 [211/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:49.960 [212/738] Generating lib/rte_cfgfile_def with a custom command 00:02:49.960 [213/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.960 [214/738] Generating lib/rte_cfgfile_mingw with a custom command 00:02:49.960 [215/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:49.960 [216/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:49.960 [217/738] Generating lib/rte_compressdev_def with a custom command 00:02:49.960 [218/738] Generating lib/rte_compressdev_mingw with a custom command 00:02:50.218 [219/738] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:50.218 [220/738] Linking static target lib/librte_cfgfile.a 00:02:50.218 [221/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:50.218 [222/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:50.218 [223/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:50.218 [224/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:50.218 [225/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.476 [226/738] Generating lib/rte_cryptodev_mingw with a custom command 00:02:50.476 [227/738] Generating lib/rte_cryptodev_def with a custom command 00:02:50.476 [228/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:50.476 [229/738] Linking static target lib/librte_compressdev.a 00:02:50.476 [230/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:50.476 [231/738] Linking static target lib/librte_bpf.a 00:02:50.733 [232/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:50.733 [233/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:50.733 [234/738] Linking static target lib/librte_acl.a 00:02:50.733 [235/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:50.733 [236/738] Generating lib/rte_distributor_def with a custom command 00:02:50.733 [237/738] Generating lib/rte_distributor_mingw with a custom command 00:02:50.733 [238/738] Generating lib/rte_efd_def with a custom command 00:02:50.733 [239/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:50.733 [240/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.733 [241/738] Generating lib/rte_efd_mingw with a custom command 00:02:50.733 [242/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.990 [243/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:50.990 [244/738] Linking static target lib/librte_distributor.a 00:02:50.990 [245/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.990 [246/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:50.990 [247/738] Generating 
lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.990 [248/738] Linking target lib/librte_eal.so.23.0 00:02:50.990 [249/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:50.990 [250/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.990 [251/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:51.247 [252/738] Linking target lib/librte_ring.so.23.0 00:02:51.247 [253/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:51.247 [254/738] Linking target lib/librte_meter.so.23.0 00:02:51.247 [255/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:51.247 [256/738] Linking target lib/librte_rcu.so.23.0 00:02:51.247 [257/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:51.247 [258/738] Linking target lib/librte_mempool.so.23.0 00:02:51.505 [259/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:51.505 [260/738] Linking target lib/librte_pci.so.23.0 00:02:51.505 [261/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:51.505 [262/738] Linking target lib/librte_mbuf.so.23.0 00:02:51.505 [263/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:51.505 [264/738] Linking target lib/librte_timer.so.23.0 00:02:51.505 [265/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:51.505 [266/738] Linking target lib/librte_net.so.23.0 00:02:51.764 [267/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:51.764 [268/738] Linking target lib/librte_acl.so.23.0 00:02:51.764 [269/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:51.764 [270/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:51.764 [271/738] Linking target lib/librte_cmdline.so.23.0 00:02:51.764 [272/738] Linking target lib/librte_hash.so.23.0 00:02:51.764 [273/738] Linking target lib/librte_bbdev.so.23.0 00:02:51.764 [274/738] Linking target lib/librte_cfgfile.so.23.0 00:02:51.764 [275/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:51.764 [276/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:51.764 [277/738] Linking target lib/librte_compressdev.so.23.0 00:02:51.764 [278/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:51.764 [279/738] Generating lib/rte_eventdev_def with a custom command 00:02:51.764 [280/738] Generating lib/rte_eventdev_mingw with a custom command 00:02:51.764 [281/738] Linking target lib/librte_distributor.so.23.0 00:02:51.764 [282/738] Generating lib/rte_gpudev_def with a custom command 00:02:51.764 [283/738] Generating lib/rte_gpudev_mingw with a custom command 00:02:51.764 [284/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:51.764 [285/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:51.764 [286/738] Linking static target lib/librte_efd.a 00:02:51.764 [287/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.764 [288/738] Linking target lib/librte_ethdev.so.23.0 00:02:52.022 [289/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:52.022 
[290/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:52.022 [291/738] Linking target lib/librte_metrics.so.23.0 00:02:52.022 [292/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.022 [293/738] Linking target lib/librte_bpf.so.23.0 00:02:52.022 [294/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:52.022 [295/738] Linking target lib/librte_bitratestats.so.23.0 00:02:52.279 [296/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:52.279 [297/738] Linking target lib/librte_efd.so.23.0 00:02:52.279 [298/738] Linking static target lib/librte_cryptodev.a 00:02:52.279 [299/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:52.279 [300/738] Generating lib/rte_gro_def with a custom command 00:02:52.279 [301/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:52.279 [302/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:52.280 [303/738] Generating lib/rte_gro_mingw with a custom command 00:02:52.280 [304/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:52.280 [305/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:52.280 [306/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:52.280 [307/738] Linking static target lib/librte_gro.a 00:02:52.280 [308/738] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:52.280 [309/738] Linking static target lib/librte_gpudev.a 00:02:52.537 [310/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:52.537 [311/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:52.537 [312/738] Generating lib/rte_gso_def with a custom command 00:02:52.537 [313/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:52.537 [314/738] Generating lib/rte_gso_mingw with a custom command 00:02:52.537 [315/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.537 [316/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:52.537 [317/738] Linking target lib/librte_gro.so.23.0 00:02:52.537 [318/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:52.795 [319/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:52.795 [320/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:52.795 [321/738] Linking static target lib/librte_gso.a 00:02:52.795 [322/738] Generating lib/rte_ip_frag_def with a custom command 00:02:52.795 [323/738] Generating lib/rte_ip_frag_mingw with a custom command 00:02:52.795 [324/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:52.795 [325/738] Generating lib/rte_jobstats_def with a custom command 00:02:52.795 [326/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:52.795 [327/738] Generating lib/rte_jobstats_mingw with a custom command 00:02:52.795 [328/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.795 [329/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:52.795 [330/738] Linking static target lib/librte_eventdev.a 00:02:52.795 [331/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.795 [332/738] Generating lib/rte_latencystats_def with a custom command 00:02:52.795 [333/738] Linking 
target lib/librte_gpudev.so.23.0 00:02:52.795 [334/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:53.054 [335/738] Linking static target lib/librte_jobstats.a 00:02:53.054 [336/738] Generating lib/rte_latencystats_mingw with a custom command 00:02:53.054 [337/738] Linking target lib/librte_gso.so.23.0 00:02:53.054 [338/738] Generating lib/rte_lpm_def with a custom command 00:02:53.054 [339/738] Generating lib/rte_lpm_mingw with a custom command 00:02:53.054 [340/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:53.054 [341/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:53.054 [342/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.054 [343/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:53.054 [344/738] Linking static target lib/librte_ip_frag.a 00:02:53.054 [345/738] Linking target lib/librte_jobstats.so.23.0 00:02:53.321 [346/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:53.321 [347/738] Linking static target lib/librte_latencystats.a 00:02:53.321 [348/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:53.321 [349/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.321 [350/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:53.321 [351/738] Generating lib/rte_member_def with a custom command 00:02:53.321 [352/738] Linking target lib/librte_ip_frag.so.23.0 00:02:53.322 [353/738] Generating lib/rte_member_mingw with a custom command 00:02:53.322 [354/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.322 [355/738] Linking target lib/librte_latencystats.so.23.0 00:02:53.322 [356/738] Generating lib/rte_pcapng_def with a custom command 00:02:53.589 [357/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:53.589 [358/738] Generating lib/rte_pcapng_mingw with a custom command 00:02:53.589 [359/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:53.589 [360/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.589 [361/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:53.589 [362/738] Linking target lib/librte_cryptodev.so.23.0 00:02:53.589 [363/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:53.589 [364/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:53.848 [365/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:53.848 [366/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:53.848 [367/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:53.848 [368/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:53.848 [369/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:53.848 [370/738] Linking static target lib/librte_lpm.a 00:02:53.848 [371/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:53.848 [372/738] Generating lib/rte_power_def with a custom command 00:02:53.848 [373/738] Generating lib/rte_power_mingw with a custom command 00:02:53.848 [374/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:53.848 [375/738] Generating 
lib/rte_rawdev_def with a custom command 00:02:53.848 [376/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:53.848 [377/738] Generating lib/rte_rawdev_mingw with a custom command 00:02:53.848 [378/738] Generating lib/rte_regexdev_def with a custom command 00:02:54.106 [379/738] Generating lib/rte_regexdev_mingw with a custom command 00:02:54.106 [380/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:54.106 [381/738] Linking static target lib/librte_pcapng.a 00:02:54.106 [382/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.106 [383/738] Linking target lib/librte_lpm.so.23.0 00:02:54.106 [384/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:54.106 [385/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:54.106 [386/738] Linking static target lib/librte_rawdev.a 00:02:54.106 [387/738] Generating lib/rte_dmadev_def with a custom command 00:02:54.106 [388/738] Generating lib/rte_dmadev_mingw with a custom command 00:02:54.106 [389/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:54.106 [390/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:54.106 [391/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:54.106 [392/738] Linking static target lib/librte_power.a 00:02:54.106 [393/738] Generating lib/rte_rib_def with a custom command 00:02:54.106 [394/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.364 [395/738] Generating lib/rte_rib_mingw with a custom command 00:02:54.364 [396/738] Linking target lib/librte_pcapng.so.23.0 00:02:54.364 [397/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.364 [398/738] Linking target lib/librte_eventdev.so.23.0 00:02:54.364 [399/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:54.364 [400/738] Linking static target lib/librte_regexdev.a 00:02:54.364 [401/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:54.364 [402/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:54.364 [403/738] Linking static target lib/librte_dmadev.a 00:02:54.364 [404/738] Generating lib/rte_reorder_def with a custom command 00:02:54.364 [405/738] Generating lib/rte_reorder_mingw with a custom command 00:02:54.364 [406/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:54.364 [407/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:54.364 [408/738] Linking static target lib/librte_member.a 00:02:54.364 [409/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.364 [410/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:54.622 [411/738] Linking target lib/librte_rawdev.so.23.0 00:02:54.622 [412/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:54.622 [413/738] Generating lib/rte_sched_def with a custom command 00:02:54.622 [414/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:54.622 [415/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:54.622 [416/738] Generating lib/rte_sched_mingw with a custom command 00:02:54.622 [417/738] Generating lib/rte_security_def with a custom command 00:02:54.622 [418/738] Generating lib/rte_security_mingw 
with a custom command 00:02:54.622 [419/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.622 [420/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:54.622 [421/738] Linking target lib/librte_member.so.23.0 00:02:54.622 [422/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:54.622 [423/738] Generating lib/rte_stack_def with a custom command 00:02:54.622 [424/738] Generating lib/rte_stack_mingw with a custom command 00:02:54.622 [425/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:54.622 [426/738] Linking static target lib/librte_stack.a 00:02:54.880 [427/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.880 [428/738] Linking target lib/librte_dmadev.so.23.0 00:02:54.880 [429/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:54.880 [430/738] Linking static target lib/librte_reorder.a 00:02:54.880 [431/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.880 [432/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:54.880 [433/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.880 [434/738] Linking target lib/librte_power.so.23.0 00:02:54.880 [435/738] Linking target lib/librte_regexdev.so.23.0 00:02:54.880 [436/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.880 [437/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:54.880 [438/738] Linking target lib/librte_stack.so.23.0 00:02:54.880 [439/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:54.880 [440/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.880 [441/738] Linking static target lib/librte_rib.a 00:02:55.139 [442/738] Linking target lib/librte_reorder.so.23.0 00:02:55.139 [443/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:55.139 [444/738] Generating lib/rte_vhost_def with a custom command 00:02:55.139 [445/738] Generating lib/rte_vhost_mingw with a custom command 00:02:55.139 [446/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:55.139 [447/738] Linking static target lib/librte_security.a 00:02:55.139 [448/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.439 [449/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:55.439 [450/738] Linking target lib/librte_rib.so.23.0 00:02:55.439 [451/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:55.439 [452/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:55.439 [453/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.439 [454/738] Linking target lib/librte_security.so.23.0 00:02:55.696 [455/738] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:55.696 [456/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:55.696 [457/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:55.696 [458/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:55.696 [459/738] Linking static target lib/librte_sched.a 00:02:55.696 [460/738] Generating lib/rte_ipsec_def with a custom command 00:02:55.696 [461/738] Generating 
lib/rte_ipsec_mingw with a custom command 00:02:55.955 [462/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:55.955 [463/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:55.955 [464/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:55.955 [465/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:55.955 [466/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.213 [467/738] Linking target lib/librte_sched.so.23.0 00:02:56.213 [468/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:56.213 [469/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:56.213 [470/738] Generating lib/rte_fib_def with a custom command 00:02:56.213 [471/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:56.213 [472/738] Generating lib/rte_fib_mingw with a custom command 00:02:56.471 [473/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:56.471 [474/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:56.471 [475/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:56.471 [476/738] Linking static target lib/librte_ipsec.a 00:02:56.471 [477/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:56.730 [478/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.730 [479/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:56.730 [480/738] Linking target lib/librte_ipsec.so.23.0 00:02:56.730 [481/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:56.730 [482/738] Linking static target lib/librte_fib.a 00:02:56.730 [483/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:56.730 [484/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:56.730 [485/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:56.986 [486/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:56.986 [487/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.986 [488/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:56.986 [489/738] Linking target lib/librte_fib.so.23.0 00:02:57.243 [490/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:57.243 [491/738] Generating lib/rte_port_def with a custom command 00:02:57.243 [492/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:57.243 [493/738] Generating lib/rte_port_mingw with a custom command 00:02:57.243 [494/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:57.243 [495/738] Generating lib/rte_pdump_def with a custom command 00:02:57.243 [496/738] Generating lib/rte_pdump_mingw with a custom command 00:02:57.243 [497/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:57.243 [498/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:57.500 [499/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:57.501 [500/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:57.501 [501/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:57.501 [502/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:57.501 [503/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:57.758 
[504/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:57.758 [505/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:57.758 [506/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:57.758 [507/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:57.758 [508/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:58.016 [509/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:58.016 [510/738] Linking static target lib/librte_port.a 00:02:58.016 [511/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:58.016 [512/738] Linking static target lib/librte_pdump.a 00:02:58.302 [513/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:58.302 [514/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.302 [515/738] Linking target lib/librte_pdump.so.23.0 00:02:58.302 [516/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.302 [517/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:58.302 [518/738] Generating lib/rte_table_def with a custom command 00:02:58.302 [519/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:58.302 [520/738] Linking target lib/librte_port.so.23.0 00:02:58.302 [521/738] Generating lib/rte_table_mingw with a custom command 00:02:58.302 [522/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:58.302 [523/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:58.583 [524/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:58.583 [525/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:58.583 [526/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:58.583 [527/738] Generating lib/rte_pipeline_def with a custom command 00:02:58.583 [528/738] Generating lib/rte_pipeline_mingw with a custom command 00:02:58.583 [529/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:58.583 [530/738] Linking static target lib/librte_table.a 00:02:58.583 [531/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:58.841 [532/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:58.841 [533/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:58.841 [534/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:58.841 [535/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.100 [536/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:59.100 [537/738] Linking target lib/librte_table.so.23.0 00:02:59.100 [538/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:59.100 [539/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:59.100 [540/738] Generating lib/rte_graph_def with a custom command 00:02:59.100 [541/738] Generating lib/rte_graph_mingw with a custom command 00:02:59.100 [542/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:59.358 [543/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:59.358 [544/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:59.358 [545/738] Linking static target lib/librte_graph.a 
00:02:59.358 [546/738] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:59.358 [547/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:59.358 [548/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:59.358 [549/738] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:59.358 [550/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:59.616 [551/738] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:59.616 [552/738] Generating lib/rte_node_def with a custom command 00:02:59.616 [553/738] Generating lib/rte_node_mingw with a custom command 00:02:59.616 [554/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:59.874 [555/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.874 [556/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:59.874 [557/738] Linking target lib/librte_graph.so.23.0 00:02:59.874 [558/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:59.874 [559/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:59.874 [560/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:59.874 [561/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:59.874 [562/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:59.874 [563/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:59.874 [564/738] Generating drivers/rte_bus_pci_def with a custom command 00:02:59.874 [565/738] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:59.874 [566/738] Generating drivers/rte_bus_vdev_def with a custom command 00:02:59.874 [567/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:59.874 [568/738] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:00.133 [569/738] Generating drivers/rte_mempool_ring_def with a custom command 00:03:00.133 [570/738] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:00.133 [571/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:00.133 [572/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:00.133 [573/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:00.133 [574/738] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:00.133 [575/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:00.133 [576/738] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:00.133 [577/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:00.133 [578/738] Linking static target lib/librte_node.a 00:03:00.392 [579/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:00.392 [580/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:00.392 [581/738] Linking static target drivers/librte_bus_vdev.a 00:03:00.392 [582/738] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:00.392 [583/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:00.392 [584/738] Linking static target drivers/librte_bus_pci.a 00:03:00.392 [585/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.392 [586/738] Linking target lib/librte_node.so.23.0 00:03:00.392 [587/738] Compiling C 
object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:00.392 [588/738] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:00.392 [589/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.650 [590/738] Linking target drivers/librte_bus_vdev.so.23.0 00:03:00.650 [591/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:00.650 [592/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:00.650 [593/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.650 [594/738] Linking target drivers/librte_bus_pci.so.23.0 00:03:00.650 [595/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:00.650 [596/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:00.650 [597/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:00.650 [598/738] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:00.650 [599/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:00.909 [600/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:00.909 [601/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:00.909 [602/738] Linking static target drivers/librte_mempool_ring.a 00:03:00.909 [603/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:00.909 [604/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:00.909 [605/738] Linking target drivers/librte_mempool_ring.so.23.0 00:03:01.168 [606/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:01.426 [607/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:01.426 [608/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:01.426 [609/738] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:01.684 [610/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:01.943 [611/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:01.943 [612/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:02.200 [613/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:02.200 [614/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:02.200 [615/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:02.457 [616/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:02.457 [617/738] Generating drivers/rte_net_i40e_def with a custom command 00:03:02.457 [618/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:02.457 [619/738] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:03.023 [620/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:03.023 [621/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:03.023 [622/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:03.281 [623/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:03.281 [624/738] Compiling C 
object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:03.281 [625/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:03.281 [626/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:03.281 [627/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:03.281 [628/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:03.539 [629/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:03.539 [630/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:03.539 [631/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:03.798 [632/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:03.798 [633/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:03.798 [634/738] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:03.798 [635/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:04.055 [636/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:04.056 [637/738] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:04.056 [638/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:04.056 [639/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:04.056 [640/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:04.056 [641/738] Linking static target drivers/librte_net_i40e.a 00:03:04.313 [642/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:04.313 [643/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:04.313 [644/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:04.313 [645/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:04.570 [646/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.570 [647/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:04.570 [648/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:04.570 [649/738] Linking target drivers/librte_net_i40e.so.23.0 00:03:04.570 [650/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:04.828 [651/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:04.828 [652/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:04.828 [653/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:04.828 [654/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:04.828 [655/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:04.828 [656/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:05.086 [657/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:05.086 [658/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:05.086 [659/738] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:05.086 [660/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:05.344 [661/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:05.344 [662/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:05.344 [663/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:05.344 [664/738] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:05.601 [665/738] Linking static target lib/librte_vhost.a 00:03:05.601 [666/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:05.860 [667/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:05.860 [668/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:05.860 [669/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:06.129 [670/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:06.129 [671/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:06.129 [672/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:06.129 [673/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:06.129 [674/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:06.130 [675/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.130 [676/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:06.387 [677/738] Linking target lib/librte_vhost.so.23.0 00:03:06.387 [678/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:06.387 [679/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:06.387 [680/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:06.387 [681/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:06.644 [682/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:06.644 [683/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:06.644 [684/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:06.644 [685/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:06.644 [686/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:06.644 [687/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:06.901 [688/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:06.901 [689/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:06.901 [690/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:07.159 [691/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:07.159 [692/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:07.417 [693/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:07.417 [694/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:07.417 [695/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:07.417 [696/738] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:07.675 [697/738] Compiling C object 
app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:07.675 [698/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:07.675 [699/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:07.932 [700/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:07.932 [701/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:08.190 [702/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:08.190 [703/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:08.190 [704/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:08.190 [705/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:08.448 [706/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:08.706 [707/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:08.706 [708/738] Linking static target lib/librte_pipeline.a 00:03:08.706 [709/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:08.706 [710/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:08.706 [711/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:08.964 [712/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:08.964 [713/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:08.964 [714/738] Linking target app/dpdk-dumpcap 00:03:08.964 [715/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:08.964 [716/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:08.964 [717/738] Linking target app/dpdk-pdump 00:03:08.964 [718/738] Linking target app/dpdk-proc-info 00:03:09.222 [719/738] Linking target app/dpdk-test-acl 00:03:09.222 [720/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:09.222 [721/738] Linking target app/dpdk-test-bbdev 00:03:09.222 [722/738] Linking target app/dpdk-test-cmdline 00:03:09.222 [723/738] Linking target app/dpdk-test-compress-perf 00:03:09.222 [724/738] Linking target app/dpdk-test-crypto-perf 00:03:09.222 [725/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:09.222 [726/738] Linking target app/dpdk-test-fib 00:03:09.222 [727/738] Linking target app/dpdk-test-eventdev 00:03:09.479 [728/738] Linking target app/dpdk-test-flow-perf 00:03:09.479 [729/738] Linking target app/dpdk-test-pipeline 00:03:09.479 [730/738] Linking target app/dpdk-test-gpudev 00:03:09.479 [731/738] Linking target app/dpdk-test-regex 00:03:09.479 [732/738] Linking target app/dpdk-test-sad 00:03:09.479 [733/738] Linking target app/dpdk-testpmd 00:03:09.735 [734/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:09.991 [735/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:10.247 [736/738] Linking target app/dpdk-test-security-perf 00:03:11.619 [737/738] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.619 [738/738] Linking target lib/librte_pipeline.so.23.0 00:03:11.619 23:08:35 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:11.619 23:08:35 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:11.619 23:08:35 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:11.619 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:11.619 [0/1] 
Installing files. 00:03:11.878 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.878 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.879 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.879 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.140 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:12.141 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:12.142 
Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.142 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.143 
Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.143 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:12.143 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:12.143 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.143 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:12.144 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_rawdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:12.144 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:12.144 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:12.144 Installing drivers/librte_net_i40e.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.144 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:12.144 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.144 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.144 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.144 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.144 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.144 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.144 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.144 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.144 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.404 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.404 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.404 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.404 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.404 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.404 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.404 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.404 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.404 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 
Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.405 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.406 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:12.407 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:12.407 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:12.407 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:12.407 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:12.407 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:12.407 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:12.407 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:12.407 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:12.407 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:12.407 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:12.407 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:12.407 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:12.407 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:12.407 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:12.407 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:12.407 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:12.407 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:12.407 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:12.407 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:12.407 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:12.407 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 
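Each of these "Installing symlink" records lays down part of the conventional three-name chain for a DPDK shared library: the real file (librte_kvargs.so.23.0) carries the code, the versioned link (librte_kvargs.so.23) matches the soname the dynamic loader resolves at run time, and the unversioned link (librte_kvargs.so) is what the link editor finds via -lrte_kvargs at build time. A minimal sketch of the same chain, assuming the install prefix shown in the log:

  cd /home/vagrant/spdk_repo/dpdk/build/lib
  # loader-facing link: its name matches the library's soname
  ln -sf librte_kvargs.so.23.0 librte_kvargs.so.23
  # developer-facing link: resolved by the linker for -lrte_kvargs
  ln -sf librte_kvargs.so.23 librte_kvargs.so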
00:03:12.407 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:12.407 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:12.407 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:12.407 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:12.407 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:12.407 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:12.407 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:12.407 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:12.407 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:12.407 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:12.407 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:12.407 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:12.407 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:12.407 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:12.407 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:12.407 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:12.407 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:12.407 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:12.407 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:12.407 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:12.407 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:12.407 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:12.407 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:12.407 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:12.407 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:12.407 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:12.407 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:12.407 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:12.407 
Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:12.407 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:12.407 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:12.407 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:12.407 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:12.407 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:12.407 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:12.407 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:12.407 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:12.407 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:12.407 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:12.407 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:12.407 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:12.407 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:12.407 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:12.407 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:12.407 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:12.407 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:12.407 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:12.407 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:12.407 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:12.407 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:12.408 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:12.408 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:12.408 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:12.408 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:12.408 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:12.408 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:12.408 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:12.408 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:12.408 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:12.408 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:12.408 Installing symlink pointing to librte_power.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:12.408 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:12.408 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:12.408 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:12.408 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:12.408 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:12.408 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:12.408 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:12.408 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:12.408 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:12.408 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:12.408 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:12.408 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:12.408 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:12.408 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:12.408 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:12.408 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:12.408 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:12.408 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:12.408 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:12.408 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:12.408 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:12.408 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:12.408 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:12.408 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:12.408 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:12.408 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:12.408 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:12.408 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:12.408 
Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:12.408 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:12.408 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:12.408 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:12.408 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:12.408 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:12.408 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:12.408 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:12.408 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:12.408 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:12.408 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:12.408 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:12.408 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:12.408 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:12.408 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:12.408 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:12.408 23:08:36 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:12.408 ************************************ 00:03:12.408 END TEST build_native_dpdk 00:03:12.408 ************************************ 00:03:12.408 23:08:36 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:12.408 00:03:12.408 real 0m34.355s 00:03:12.408 user 3m41.061s 00:03:12.408 sys 0m34.249s 00:03:12.408 23:08:36 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:12.408 23:08:36 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:12.408 23:08:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:12.408 23:08:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:12.408 23:08:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:12.408 23:08:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:12.408 23:08:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:12.408 23:08:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:12.408 23:08:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:12.408 23:08:36 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk 
--with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:12.408 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:12.665 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.665 DPDK includes: /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.665 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:12.925 Using 'verbs' RDMA provider 00:03:23.866 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:36.180 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:36.180 Creating mk/config.mk...done. 00:03:36.180 Creating mk/cc.flags.mk...done. 00:03:36.180 Type 'make' to build. 00:03:36.180 23:08:58 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:36.180 23:08:58 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:36.181 23:08:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:36.181 23:08:58 -- common/autotest_common.sh@10 -- $ set +x 00:03:36.181 ************************************ 00:03:36.181 START TEST make 00:03:36.181 ************************************ 00:03:36.181 23:08:58 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:36.181 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:36.181 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:36.181 meson setup builddir \ 00:03:36.181 -Dwith-libaio=enabled \ 00:03:36.181 -Dwith-liburing=enabled \ 00:03:36.181 -Dwith-libvfn=disabled \ 00:03:36.181 -Dwith-spdk=disabled \ 00:03:36.181 -Dexamples=false \ 00:03:36.181 -Dtests=false \ 00:03:36.181 -Dtools=false && \ 00:03:36.181 meson compile -C builddir && \ 00:03:36.181 cd -) 00:03:36.181 make[1]: Nothing to be done for 'all'.
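The xnvme subproject above is built out of tree: meson setup records the -D feature toggles in builddir, and meson compile drives ninja there. Those toggles can be inspected or changed later without recreating the build directory; a short sketch using meson's stock option commands (the -Dexamples value here is only an illustrative choice, not one this run uses):

  meson configure builddir                  # list every option and its current value
  meson configure builddir -Dexamples=true  # flip a single feature toggle in place
  meson compile -C builddir                 # rebuild with the new configuration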
00:03:37.124 The Meson build system 00:03:37.124 Version: 1.5.0 00:03:37.124 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:37.124 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:37.124 Build type: native build 00:03:37.124 Project name: xnvme 00:03:37.124 Project version: 0.7.5 00:03:37.124 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:37.124 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:37.124 Host machine cpu family: x86_64 00:03:37.124 Host machine cpu: x86_64 00:03:37.124 Message: host_machine.system: linux 00:03:37.124 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:37.124 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:37.124 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:37.124 Run-time dependency threads found: YES 00:03:37.124 Has header "setupapi.h" : NO 00:03:37.124 Has header "linux/blkzoned.h" : YES 00:03:37.124 Has header "linux/blkzoned.h" : YES (cached) 00:03:37.124 Has header "libaio.h" : YES 00:03:37.124 Library aio found: YES 00:03:37.124 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:37.124 Run-time dependency liburing found: YES 2.2 00:03:37.124 Dependency libvfn skipped: feature with-libvfn disabled 00:03:37.124 Found CMake: /usr/bin/cmake (3.27.7) 00:03:37.124 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:37.124 Subproject spdk : skipped: feature with-spdk disabled 00:03:37.124 Run-time dependency appleframeworks found: NO (tried framework) 00:03:37.124 Run-time dependency appleframeworks found: NO (tried framework) 00:03:37.124 Library rt found: YES 00:03:37.124 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:37.124 Configuring xnvme_config.h using configuration 00:03:37.124 Configuring xnvme.spec using configuration 00:03:37.124 Run-time dependency bash-completion found: YES 2.11 00:03:37.124 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:37.124 Program cp found: YES (/usr/bin/cp) 00:03:37.124 Build targets in project: 3 00:03:37.124 00:03:37.124 xnvme 0.7.5 00:03:37.124 00:03:37.124 Subprojects 00:03:37.124 spdk : NO Feature 'with-spdk' disabled 00:03:37.124 00:03:37.124 User defined options 00:03:37.124 examples : false 00:03:37.124 tests : false 00:03:37.124 tools : false 00:03:37.124 with-libaio : enabled 00:03:37.124 with-liburing: enabled 00:03:37.124 with-libvfn : disabled 00:03:37.124 with-spdk : disabled 00:03:37.124 00:03:37.124 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:37.694 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:37.694 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:37.694 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:37.694 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:37.694 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:37.694 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:37.694 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:37.694 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:37.694 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:37.695 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:37.695 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 
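The "Run-time dependency ... found" probes in the configuration summary above resolve through pkg-config, with CMake and framework lookups as fallbacks (the summary itself notes "tried pkgconfig and cmake" for libisal and "tried framework" for appleframeworks). The same lookup can be reproduced by hand on the builder; a sketch, assuming the host's default pkg-config search path:

  pkg-config --modversion liburing      # prints 2.2 on this host, per the summary
  pkg-config --cflags --libs liburing   # the flags meson records for the dependency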
00:03:37.695 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:37.695 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:37.695 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:37.695 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:37.695 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:37.695 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:37.955 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:37.955 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:37.955 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:37.955 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:37.955 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:37.955 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:37.955 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:37.955 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:37.955 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:37.955 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:37.955 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:37.955 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:37.955 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:37.955 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:37.955 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:37.955 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:37.955 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:37.955 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:37.955 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:37.955 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:37.955 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:37.955 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:37.955 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:37.955 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:37.955 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:37.955 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:37.955 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:37.955 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:37.955 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:37.955 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:37.955 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:37.955 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:37.955 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:37.955 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 
00:03:37.955 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:37.955 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:38.216 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:38.216 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:38.216 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:38.216 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:38.216 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:38.216 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:38.216 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:38.216 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:38.216 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:38.216 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:38.216 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:38.216 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:38.216 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:38.216 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:38.216 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:38.216 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:38.216 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:38.216 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:38.476 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:38.476 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:38.476 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:38.738 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:38.738 [75/76] Linking static target lib/libxnvme.a 00:03:38.738 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:38.738 INFO: autodetecting backend as ninja 00:03:38.738 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:38.738 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:10.826 CC lib/ut/ut.o 00:04:10.826 CC lib/ut_mock/mock.o 00:04:10.826 CC lib/log/log.o 00:04:10.826 CC lib/log/log_flags.o 00:04:10.826 CC lib/log/log_deprecated.o 00:04:10.826 LIB libspdk_ut.a 00:04:10.826 LIB libspdk_ut_mock.a 00:04:10.826 LIB libspdk_log.a 00:04:10.826 SO libspdk_ut_mock.so.6.0 00:04:10.826 SO libspdk_ut.so.2.0 00:04:10.826 SO libspdk_log.so.7.1 00:04:11.085 SYMLINK libspdk_ut_mock.so 00:04:11.085 SYMLINK libspdk_ut.so 00:04:11.085 SYMLINK libspdk_log.so 00:04:11.085 CC lib/ioat/ioat.o 00:04:11.085 CC lib/dma/dma.o 00:04:11.085 CC lib/util/base64.o 00:04:11.085 CC lib/util/cpuset.o 00:04:11.085 CC lib/util/bit_array.o 00:04:11.085 CC lib/util/crc16.o 00:04:11.085 CC lib/util/crc32.o 00:04:11.085 CC lib/util/crc32c.o 00:04:11.085 CXX lib/trace_parser/trace.o 00:04:11.343 CC lib/vfio_user/host/vfio_user_pci.o 00:04:11.343 CC lib/util/crc32_ieee.o 00:04:11.343 CC lib/util/crc64.o 00:04:11.343 CC lib/vfio_user/host/vfio_user.o 00:04:11.343 CC lib/util/dif.o 00:04:11.343 LIB libspdk_dma.a 00:04:11.343 CC lib/util/fd.o 00:04:11.343 CC lib/util/fd_group.o 00:04:11.343 SO libspdk_dma.so.5.0 00:04:11.343 CC lib/util/file.o 00:04:11.343 CC lib/util/hexlify.o 00:04:11.343 SYMLINK libspdk_dma.so 00:04:11.343 LIB libspdk_ioat.a 00:04:11.343 CC 
lib/util/iov.o 00:04:11.343 SO libspdk_ioat.so.7.0 00:04:11.343 CC lib/util/math.o 00:04:11.343 LIB libspdk_vfio_user.a 00:04:11.343 CC lib/util/net.o 00:04:11.603 SYMLINK libspdk_ioat.so 00:04:11.603 CC lib/util/pipe.o 00:04:11.603 SO libspdk_vfio_user.so.5.0 00:04:11.603 CC lib/util/strerror_tls.o 00:04:11.603 CC lib/util/string.o 00:04:11.603 SYMLINK libspdk_vfio_user.so 00:04:11.603 CC lib/util/uuid.o 00:04:11.603 CC lib/util/xor.o 00:04:11.603 CC lib/util/zipf.o 00:04:11.603 CC lib/util/md5.o 00:04:11.862 LIB libspdk_util.a 00:04:11.862 SO libspdk_util.so.10.1 00:04:12.120 LIB libspdk_trace_parser.a 00:04:12.120 SO libspdk_trace_parser.so.6.0 00:04:12.120 SYMLINK libspdk_util.so 00:04:12.120 SYMLINK libspdk_trace_parser.so 00:04:12.120 CC lib/rdma_utils/rdma_utils.o 00:04:12.120 CC lib/json/json_util.o 00:04:12.120 CC lib/json/json_write.o 00:04:12.120 CC lib/conf/conf.o 00:04:12.120 CC lib/json/json_parse.o 00:04:12.120 CC lib/vmd/vmd.o 00:04:12.120 CC lib/vmd/led.o 00:04:12.120 CC lib/env_dpdk/env.o 00:04:12.120 CC lib/env_dpdk/memory.o 00:04:12.120 CC lib/idxd/idxd.o 00:04:12.376 CC lib/env_dpdk/pci.o 00:04:12.376 LIB libspdk_conf.a 00:04:12.376 CC lib/idxd/idxd_user.o 00:04:12.376 CC lib/idxd/idxd_kernel.o 00:04:12.376 LIB libspdk_json.a 00:04:12.376 SO libspdk_conf.so.6.0 00:04:12.376 SO libspdk_json.so.6.0 00:04:12.376 LIB libspdk_rdma_utils.a 00:04:12.376 SO libspdk_rdma_utils.so.1.0 00:04:12.376 SYMLINK libspdk_conf.so 00:04:12.376 SYMLINK libspdk_json.so 00:04:12.376 CC lib/env_dpdk/init.o 00:04:12.376 CC lib/env_dpdk/threads.o 00:04:12.633 CC lib/env_dpdk/pci_ioat.o 00:04:12.633 SYMLINK libspdk_rdma_utils.so 00:04:12.633 CC lib/env_dpdk/pci_virtio.o 00:04:12.633 CC lib/env_dpdk/pci_vmd.o 00:04:12.633 CC lib/env_dpdk/pci_idxd.o 00:04:12.633 CC lib/jsonrpc/jsonrpc_server.o 00:04:12.633 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:12.633 CC lib/rdma_provider/common.o 00:04:12.633 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:12.633 CC lib/jsonrpc/jsonrpc_client.o 00:04:12.633 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:12.890 CC lib/env_dpdk/pci_event.o 00:04:12.890 LIB libspdk_idxd.a 00:04:12.890 CC lib/env_dpdk/sigbus_handler.o 00:04:12.890 LIB libspdk_vmd.a 00:04:12.890 SO libspdk_idxd.so.12.1 00:04:12.890 SO libspdk_vmd.so.6.0 00:04:12.890 CC lib/env_dpdk/pci_dpdk.o 00:04:12.890 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:12.890 SYMLINK libspdk_idxd.so 00:04:12.890 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:12.890 LIB libspdk_rdma_provider.a 00:04:12.890 SYMLINK libspdk_vmd.so 00:04:12.890 SO libspdk_rdma_provider.so.7.0 00:04:12.890 LIB libspdk_jsonrpc.a 00:04:12.890 SO libspdk_jsonrpc.so.6.0 00:04:12.890 SYMLINK libspdk_rdma_provider.so 00:04:13.148 SYMLINK libspdk_jsonrpc.so 00:04:13.148 CC lib/rpc/rpc.o 00:04:13.407 LIB libspdk_rpc.a 00:04:13.407 SO libspdk_rpc.so.6.0 00:04:13.665 SYMLINK libspdk_rpc.so 00:04:13.665 LIB libspdk_env_dpdk.a 00:04:13.665 SO libspdk_env_dpdk.so.15.1 00:04:13.665 CC lib/notify/notify.o 00:04:13.665 CC lib/notify/notify_rpc.o 00:04:13.665 CC lib/keyring/keyring.o 00:04:13.665 CC lib/keyring/keyring_rpc.o 00:04:13.665 CC lib/trace/trace_flags.o 00:04:13.665 CC lib/trace/trace.o 00:04:13.665 CC lib/trace/trace_rpc.o 00:04:13.665 SYMLINK libspdk_env_dpdk.so 00:04:13.922 LIB libspdk_notify.a 00:04:13.922 SO libspdk_notify.so.6.0 00:04:13.922 SYMLINK libspdk_notify.so 00:04:13.922 LIB libspdk_keyring.a 00:04:13.922 SO libspdk_keyring.so.2.0 00:04:13.922 LIB libspdk_trace.a 00:04:13.922 SO libspdk_trace.so.11.0 00:04:13.922 SYMLINK libspdk_keyring.so 
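The paired SO/SYMLINK records in this stretch are the shared-library stage of SPDK's quiet make output: each component is linked as a versioned object (for example libspdk_log.so.7.1 above) and then exposed through an unversioned symlink for linking. The version baked into such a library can be read back from its dynamic section; a sketch, assuming SPDK's usual build/lib output directory under the repo (the path is not shown in this log):

  readelf -d /home/vagrant/spdk_repo/spdk/build/lib/libspdk_log.so | grep -E 'SONAME|NEEDED'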
00:04:14.177 SYMLINK libspdk_trace.so 00:04:14.177 CC lib/sock/sock.o 00:04:14.177 CC lib/sock/sock_rpc.o 00:04:14.177 CC lib/thread/iobuf.o 00:04:14.177 CC lib/thread/thread.o 00:04:14.741 LIB libspdk_sock.a 00:04:14.741 SO libspdk_sock.so.10.0 00:04:14.741 SYMLINK libspdk_sock.so 00:04:14.997 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:14.998 CC lib/nvme/nvme_ctrlr.o 00:04:14.998 CC lib/nvme/nvme_ns_cmd.o 00:04:14.998 CC lib/nvme/nvme_fabric.o 00:04:14.998 CC lib/nvme/nvme.o 00:04:14.998 CC lib/nvme/nvme_pcie.o 00:04:14.998 CC lib/nvme/nvme_qpair.o 00:04:14.998 CC lib/nvme/nvme_ns.o 00:04:14.998 CC lib/nvme/nvme_pcie_common.o 00:04:15.563 LIB libspdk_thread.a 00:04:15.563 CC lib/nvme/nvme_quirks.o 00:04:15.563 SO libspdk_thread.so.11.0 00:04:15.563 CC lib/nvme/nvme_transport.o 00:04:15.563 CC lib/nvme/nvme_discovery.o 00:04:15.563 SYMLINK libspdk_thread.so 00:04:15.563 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:15.563 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:15.563 CC lib/nvme/nvme_tcp.o 00:04:15.822 CC lib/nvme/nvme_opal.o 00:04:15.822 CC lib/nvme/nvme_io_msg.o 00:04:15.822 CC lib/accel/accel.o 00:04:15.822 CC lib/accel/accel_rpc.o 00:04:15.822 CC lib/nvme/nvme_poll_group.o 00:04:16.080 CC lib/blob/blobstore.o 00:04:16.080 CC lib/accel/accel_sw.o 00:04:16.080 CC lib/init/json_config.o 00:04:16.080 CC lib/nvme/nvme_zns.o 00:04:16.337 CC lib/init/subsystem.o 00:04:16.337 CC lib/nvme/nvme_stubs.o 00:04:16.337 CC lib/nvme/nvme_auth.o 00:04:16.337 CC lib/init/subsystem_rpc.o 00:04:16.597 CC lib/virtio/virtio.o 00:04:16.597 CC lib/fsdev/fsdev.o 00:04:16.597 CC lib/init/rpc.o 00:04:16.597 CC lib/fsdev/fsdev_io.o 00:04:16.597 LIB libspdk_init.a 00:04:16.856 SO libspdk_init.so.6.0 00:04:16.856 CC lib/virtio/virtio_vhost_user.o 00:04:16.856 CC lib/virtio/virtio_vfio_user.o 00:04:16.856 SYMLINK libspdk_init.so 00:04:16.856 CC lib/nvme/nvme_cuse.o 00:04:16.856 LIB libspdk_accel.a 00:04:17.116 SO libspdk_accel.so.16.0 00:04:17.116 CC lib/nvme/nvme_rdma.o 00:04:17.116 CC lib/fsdev/fsdev_rpc.o 00:04:17.116 CC lib/blob/request.o 00:04:17.117 SYMLINK libspdk_accel.so 00:04:17.117 CC lib/virtio/virtio_pci.o 00:04:17.117 CC lib/blob/zeroes.o 00:04:17.117 CC lib/blob/blob_bs_dev.o 00:04:17.117 LIB libspdk_fsdev.a 00:04:17.117 CC lib/event/app.o 00:04:17.117 SO libspdk_fsdev.so.2.0 00:04:17.417 CC lib/event/reactor.o 00:04:17.417 CC lib/event/log_rpc.o 00:04:17.417 SYMLINK libspdk_fsdev.so 00:04:17.417 CC lib/event/app_rpc.o 00:04:17.417 LIB libspdk_virtio.a 00:04:17.417 CC lib/event/scheduler_static.o 00:04:17.417 CC lib/bdev/bdev.o 00:04:17.417 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:17.417 SO libspdk_virtio.so.7.0 00:04:17.417 SYMLINK libspdk_virtio.so 00:04:17.417 CC lib/bdev/bdev_rpc.o 00:04:17.677 CC lib/bdev/bdev_zone.o 00:04:17.677 CC lib/bdev/part.o 00:04:17.677 CC lib/bdev/scsi_nvme.o 00:04:17.677 LIB libspdk_event.a 00:04:17.677 SO libspdk_event.so.14.0 00:04:17.677 SYMLINK libspdk_event.so 00:04:17.938 LIB libspdk_fuse_dispatcher.a 00:04:17.939 SO libspdk_fuse_dispatcher.so.1.0 00:04:17.939 SYMLINK libspdk_fuse_dispatcher.so 00:04:18.510 LIB libspdk_nvme.a 00:04:18.510 SO libspdk_nvme.so.15.0 00:04:18.768 LIB libspdk_blob.a 00:04:18.768 SO libspdk_blob.so.11.0 00:04:18.768 SYMLINK libspdk_nvme.so 00:04:18.768 SYMLINK libspdk_blob.so 00:04:19.029 CC lib/lvol/lvol.o 00:04:19.029 CC lib/blobfs/blobfs.o 00:04:19.029 CC lib/blobfs/tree.o 00:04:19.599 LIB libspdk_blobfs.a 00:04:19.599 SO libspdk_blobfs.so.10.0 00:04:19.860 SYMLINK libspdk_blobfs.so 00:04:19.860 LIB libspdk_bdev.a 00:04:19.860 SO 
libspdk_bdev.so.17.0 00:04:19.860 LIB libspdk_lvol.a 00:04:19.860 SO libspdk_lvol.so.10.0 00:04:20.122 SYMLINK libspdk_bdev.so 00:04:20.122 SYMLINK libspdk_lvol.so 00:04:20.122 CC lib/scsi/dev.o 00:04:20.122 CC lib/ublk/ublk.o 00:04:20.122 CC lib/ublk/ublk_rpc.o 00:04:20.122 CC lib/scsi/scsi.o 00:04:20.122 CC lib/scsi/port.o 00:04:20.122 CC lib/scsi/lun.o 00:04:20.122 CC lib/scsi/scsi_bdev.o 00:04:20.122 CC lib/nvmf/ctrlr.o 00:04:20.122 CC lib/ftl/ftl_core.o 00:04:20.122 CC lib/nbd/nbd.o 00:04:20.383 CC lib/nbd/nbd_rpc.o 00:04:20.383 CC lib/scsi/scsi_pr.o 00:04:20.383 CC lib/scsi/scsi_rpc.o 00:04:20.383 CC lib/nvmf/ctrlr_discovery.o 00:04:20.383 CC lib/nvmf/ctrlr_bdev.o 00:04:20.383 CC lib/nvmf/subsystem.o 00:04:20.383 LIB libspdk_nbd.a 00:04:20.383 CC lib/scsi/task.o 00:04:20.383 SO libspdk_nbd.so.7.0 00:04:20.645 SYMLINK libspdk_nbd.so 00:04:20.645 CC lib/ftl/ftl_init.o 00:04:20.645 CC lib/nvmf/nvmf.o 00:04:20.645 CC lib/ftl/ftl_layout.o 00:04:20.645 CC lib/nvmf/nvmf_rpc.o 00:04:20.645 LIB libspdk_ublk.a 00:04:20.645 SO libspdk_ublk.so.3.0 00:04:20.645 CC lib/ftl/ftl_debug.o 00:04:20.645 LIB libspdk_scsi.a 00:04:20.645 SYMLINK libspdk_ublk.so 00:04:20.645 SO libspdk_scsi.so.9.0 00:04:20.645 CC lib/nvmf/transport.o 00:04:20.645 CC lib/nvmf/tcp.o 00:04:20.907 SYMLINK libspdk_scsi.so 00:04:20.907 CC lib/nvmf/stubs.o 00:04:20.907 CC lib/ftl/ftl_io.o 00:04:20.907 CC lib/ftl/ftl_sb.o 00:04:21.168 CC lib/ftl/ftl_l2p.o 00:04:21.168 CC lib/ftl/ftl_l2p_flat.o 00:04:21.169 CC lib/nvmf/mdns_server.o 00:04:21.169 CC lib/nvmf/rdma.o 00:04:21.169 CC lib/nvmf/auth.o 00:04:21.169 CC lib/ftl/ftl_nv_cache.o 00:04:21.169 CC lib/ftl/ftl_band.o 00:04:21.430 CC lib/iscsi/conn.o 00:04:21.430 CC lib/iscsi/init_grp.o 00:04:21.691 CC lib/iscsi/iscsi.o 00:04:21.691 CC lib/iscsi/param.o 00:04:21.691 CC lib/ftl/ftl_band_ops.o 00:04:21.691 CC lib/ftl/ftl_writer.o 00:04:21.691 CC lib/ftl/ftl_rq.o 00:04:21.952 CC lib/ftl/ftl_reloc.o 00:04:21.952 CC lib/ftl/ftl_l2p_cache.o 00:04:21.952 CC lib/ftl/ftl_p2l.o 00:04:21.952 CC lib/ftl/ftl_p2l_log.o 00:04:21.952 CC lib/ftl/mngt/ftl_mngt.o 00:04:21.952 CC lib/iscsi/portal_grp.o 00:04:21.952 CC lib/vhost/vhost.o 00:04:21.952 CC lib/vhost/vhost_rpc.o 00:04:22.213 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:22.213 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:22.213 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:22.213 CC lib/iscsi/tgt_node.o 00:04:22.213 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:22.213 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:22.474 CC lib/iscsi/iscsi_subsystem.o 00:04:22.474 CC lib/iscsi/iscsi_rpc.o 00:04:22.474 CC lib/iscsi/task.o 00:04:22.474 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:22.474 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:22.474 CC lib/vhost/vhost_scsi.o 00:04:22.474 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:22.733 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:22.733 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:22.733 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:22.733 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:22.733 CC lib/vhost/vhost_blk.o 00:04:22.733 CC lib/vhost/rte_vhost_user.o 00:04:22.733 LIB libspdk_iscsi.a 00:04:22.733 CC lib/ftl/utils/ftl_conf.o 00:04:22.733 CC lib/ftl/utils/ftl_md.o 00:04:22.991 LIB libspdk_nvmf.a 00:04:22.991 SO libspdk_iscsi.so.8.0 00:04:22.991 CC lib/ftl/utils/ftl_mempool.o 00:04:22.991 CC lib/ftl/utils/ftl_bitmap.o 00:04:22.991 SO libspdk_nvmf.so.20.0 00:04:22.991 SYMLINK libspdk_iscsi.so 00:04:22.991 CC lib/ftl/utils/ftl_property.o 00:04:22.991 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:22.991 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:22.991 CC 
lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:22.991 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:23.248 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:23.248 SYMLINK libspdk_nvmf.so 00:04:23.248 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:23.248 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:23.248 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:23.248 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:23.248 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:23.248 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:23.248 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:23.248 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:23.248 CC lib/ftl/base/ftl_base_dev.o 00:04:23.248 CC lib/ftl/base/ftl_base_bdev.o 00:04:23.505 CC lib/ftl/ftl_trace.o 00:04:23.505 LIB libspdk_vhost.a 00:04:23.505 SO libspdk_vhost.so.8.0 00:04:23.505 LIB libspdk_ftl.a 00:04:23.762 SYMLINK libspdk_vhost.so 00:04:23.762 SO libspdk_ftl.so.9.0 00:04:24.020 SYMLINK libspdk_ftl.so 00:04:24.278 CC module/env_dpdk/env_dpdk_rpc.o 00:04:24.278 CC module/accel/error/accel_error.o 00:04:24.278 CC module/blob/bdev/blob_bdev.o 00:04:24.278 CC module/keyring/file/keyring.o 00:04:24.278 CC module/accel/dsa/accel_dsa.o 00:04:24.278 CC module/sock/posix/posix.o 00:04:24.278 CC module/fsdev/aio/fsdev_aio.o 00:04:24.278 CC module/accel/iaa/accel_iaa.o 00:04:24.278 CC module/accel/ioat/accel_ioat.o 00:04:24.278 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:24.536 LIB libspdk_env_dpdk_rpc.a 00:04:24.536 SO libspdk_env_dpdk_rpc.so.6.0 00:04:24.536 CC module/keyring/file/keyring_rpc.o 00:04:24.536 SYMLINK libspdk_env_dpdk_rpc.so 00:04:24.536 CC module/accel/dsa/accel_dsa_rpc.o 00:04:24.536 LIB libspdk_scheduler_dynamic.a 00:04:24.536 CC module/accel/ioat/accel_ioat_rpc.o 00:04:24.536 SO libspdk_scheduler_dynamic.so.4.0 00:04:24.536 CC module/accel/iaa/accel_iaa_rpc.o 00:04:24.536 LIB libspdk_keyring_file.a 00:04:24.536 CC module/accel/error/accel_error_rpc.o 00:04:24.536 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:24.536 SO libspdk_keyring_file.so.2.0 00:04:24.536 SYMLINK libspdk_scheduler_dynamic.so 00:04:24.536 LIB libspdk_accel_dsa.a 00:04:24.536 LIB libspdk_blob_bdev.a 00:04:24.536 SYMLINK libspdk_keyring_file.so 00:04:24.536 SO libspdk_blob_bdev.so.11.0 00:04:24.536 SO libspdk_accel_dsa.so.5.0 00:04:24.536 LIB libspdk_accel_ioat.a 00:04:24.536 SO libspdk_accel_ioat.so.6.0 00:04:24.794 SYMLINK libspdk_blob_bdev.so 00:04:24.794 SYMLINK libspdk_accel_dsa.so 00:04:24.794 LIB libspdk_accel_iaa.a 00:04:24.794 CC module/fsdev/aio/linux_aio_mgr.o 00:04:24.794 LIB libspdk_accel_error.a 00:04:24.794 SO libspdk_accel_iaa.so.3.0 00:04:24.794 SO libspdk_accel_error.so.2.0 00:04:24.794 SYMLINK libspdk_accel_ioat.so 00:04:24.794 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:24.794 SYMLINK libspdk_accel_iaa.so 00:04:24.794 SYMLINK libspdk_accel_error.so 00:04:24.794 CC module/keyring/linux/keyring.o 00:04:24.794 CC module/keyring/linux/keyring_rpc.o 00:04:24.794 CC module/scheduler/gscheduler/gscheduler.o 00:04:24.794 LIB libspdk_scheduler_dpdk_governor.a 00:04:24.794 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:24.794 LIB libspdk_keyring_linux.a 00:04:24.794 LIB libspdk_fsdev_aio.a 00:04:24.794 CC module/bdev/error/vbdev_error.o 00:04:24.794 SO libspdk_keyring_linux.so.1.0 00:04:24.794 CC module/bdev/delay/vbdev_delay.o 00:04:25.054 SO libspdk_fsdev_aio.so.1.0 00:04:25.054 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:25.054 CC module/bdev/error/vbdev_error_rpc.o 00:04:25.054 LIB libspdk_sock_posix.a 00:04:25.054 CC module/blobfs/bdev/blobfs_bdev.o 00:04:25.054 SYMLINK libspdk_keyring_linux.so 
00:04:25.054 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:25.054 SO libspdk_sock_posix.so.6.0 00:04:25.054 CC module/bdev/gpt/gpt.o 00:04:25.054 LIB libspdk_scheduler_gscheduler.a 00:04:25.054 CC module/bdev/lvol/vbdev_lvol.o 00:04:25.054 SYMLINK libspdk_fsdev_aio.so 00:04:25.054 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:25.054 SO libspdk_scheduler_gscheduler.so.4.0 00:04:25.054 SYMLINK libspdk_sock_posix.so 00:04:25.054 SYMLINK libspdk_scheduler_gscheduler.so 00:04:25.054 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:25.054 LIB libspdk_blobfs_bdev.a 00:04:25.054 CC module/bdev/gpt/vbdev_gpt.o 00:04:25.054 SO libspdk_blobfs_bdev.so.6.0 00:04:25.341 LIB libspdk_bdev_error.a 00:04:25.341 SYMLINK libspdk_blobfs_bdev.so 00:04:25.341 SO libspdk_bdev_error.so.6.0 00:04:25.341 LIB libspdk_bdev_delay.a 00:04:25.341 CC module/bdev/malloc/bdev_malloc.o 00:04:25.341 CC module/bdev/null/bdev_null.o 00:04:25.341 CC module/bdev/nvme/bdev_nvme.o 00:04:25.341 SO libspdk_bdev_delay.so.6.0 00:04:25.341 SYMLINK libspdk_bdev_error.so 00:04:25.341 CC module/bdev/null/bdev_null_rpc.o 00:04:25.341 SYMLINK libspdk_bdev_delay.so 00:04:25.341 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:25.341 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:25.341 CC module/bdev/raid/bdev_raid.o 00:04:25.341 CC module/bdev/passthru/vbdev_passthru.o 00:04:25.341 LIB libspdk_bdev_gpt.a 00:04:25.341 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:25.341 SO libspdk_bdev_gpt.so.6.0 00:04:25.341 LIB libspdk_bdev_null.a 00:04:25.600 CC module/bdev/raid/bdev_raid_rpc.o 00:04:25.600 SO libspdk_bdev_null.so.6.0 00:04:25.600 SYMLINK libspdk_bdev_gpt.so 00:04:25.600 CC module/bdev/raid/bdev_raid_sb.o 00:04:25.600 LIB libspdk_bdev_lvol.a 00:04:25.600 LIB libspdk_bdev_malloc.a 00:04:25.600 CC module/bdev/raid/raid0.o 00:04:25.600 SYMLINK libspdk_bdev_null.so 00:04:25.600 CC module/bdev/raid/raid1.o 00:04:25.600 SO libspdk_bdev_lvol.so.6.0 00:04:25.600 SO libspdk_bdev_malloc.so.6.0 00:04:25.600 LIB libspdk_bdev_passthru.a 00:04:25.600 SO libspdk_bdev_passthru.so.6.0 00:04:25.600 SYMLINK libspdk_bdev_lvol.so 00:04:25.600 CC module/bdev/raid/concat.o 00:04:25.600 SYMLINK libspdk_bdev_malloc.so 00:04:25.600 SYMLINK libspdk_bdev_passthru.so 00:04:25.861 CC module/bdev/split/vbdev_split.o 00:04:25.861 CC module/bdev/split/vbdev_split_rpc.o 00:04:25.861 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:25.861 CC module/bdev/nvme/nvme_rpc.o 00:04:25.861 CC module/bdev/xnvme/bdev_xnvme.o 00:04:25.861 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:25.861 CC module/bdev/aio/bdev_aio.o 00:04:25.861 CC module/bdev/aio/bdev_aio_rpc.o 00:04:25.861 LIB libspdk_bdev_split.a 00:04:25.861 SO libspdk_bdev_split.so.6.0 00:04:25.861 CC module/bdev/nvme/bdev_mdns_client.o 00:04:25.861 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:26.122 SYMLINK libspdk_bdev_split.so 00:04:26.122 CC module/bdev/nvme/vbdev_opal.o 00:04:26.122 LIB libspdk_bdev_xnvme.a 00:04:26.122 SO libspdk_bdev_xnvme.so.3.0 00:04:26.122 LIB libspdk_bdev_zone_block.a 00:04:26.122 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:26.122 LIB libspdk_bdev_aio.a 00:04:26.122 SO libspdk_bdev_zone_block.so.6.0 00:04:26.122 SYMLINK libspdk_bdev_xnvme.so 00:04:26.122 SO libspdk_bdev_aio.so.6.0 00:04:26.122 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:26.122 SYMLINK libspdk_bdev_zone_block.so 00:04:26.122 CC module/bdev/iscsi/bdev_iscsi.o 00:04:26.122 CC module/bdev/ftl/bdev_ftl.o 00:04:26.122 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:26.122 SYMLINK libspdk_bdev_aio.so 00:04:26.122 CC 
module/bdev/ftl/bdev_ftl_rpc.o 00:04:26.122 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:26.122 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:26.384 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:26.384 LIB libspdk_bdev_raid.a 00:04:26.384 SO libspdk_bdev_raid.so.6.0 00:04:26.384 LIB libspdk_bdev_ftl.a 00:04:26.645 SO libspdk_bdev_ftl.so.6.0 00:04:26.645 SYMLINK libspdk_bdev_raid.so 00:04:26.645 LIB libspdk_bdev_iscsi.a 00:04:26.645 SYMLINK libspdk_bdev_ftl.so 00:04:26.645 SO libspdk_bdev_iscsi.so.6.0 00:04:26.645 SYMLINK libspdk_bdev_iscsi.so 00:04:26.645 LIB libspdk_bdev_virtio.a 00:04:26.645 SO libspdk_bdev_virtio.so.6.0 00:04:26.645 SYMLINK libspdk_bdev_virtio.so 00:04:27.215 LIB libspdk_bdev_nvme.a 00:04:27.475 SO libspdk_bdev_nvme.so.7.1 00:04:27.475 SYMLINK libspdk_bdev_nvme.so 00:04:27.735 CC module/event/subsystems/fsdev/fsdev.o 00:04:27.735 CC module/event/subsystems/vmd/vmd.o 00:04:27.735 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:27.735 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:27.735 CC module/event/subsystems/iobuf/iobuf.o 00:04:27.735 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:27.993 CC module/event/subsystems/keyring/keyring.o 00:04:27.993 CC module/event/subsystems/scheduler/scheduler.o 00:04:27.993 CC module/event/subsystems/sock/sock.o 00:04:27.993 LIB libspdk_event_vhost_blk.a 00:04:27.993 LIB libspdk_event_keyring.a 00:04:27.993 LIB libspdk_event_fsdev.a 00:04:27.993 LIB libspdk_event_vmd.a 00:04:27.993 SO libspdk_event_vhost_blk.so.3.0 00:04:27.993 LIB libspdk_event_scheduler.a 00:04:27.993 LIB libspdk_event_sock.a 00:04:27.993 LIB libspdk_event_iobuf.a 00:04:27.993 SO libspdk_event_keyring.so.1.0 00:04:27.993 SO libspdk_event_fsdev.so.1.0 00:04:27.993 SO libspdk_event_vmd.so.6.0 00:04:27.993 SO libspdk_event_scheduler.so.4.0 00:04:27.993 SO libspdk_event_sock.so.5.0 00:04:27.993 SO libspdk_event_iobuf.so.3.0 00:04:27.993 SYMLINK libspdk_event_keyring.so 00:04:27.993 SYMLINK libspdk_event_vhost_blk.so 00:04:27.993 SYMLINK libspdk_event_fsdev.so 00:04:27.993 SYMLINK libspdk_event_scheduler.so 00:04:27.993 SYMLINK libspdk_event_vmd.so 00:04:27.993 SYMLINK libspdk_event_sock.so 00:04:27.993 SYMLINK libspdk_event_iobuf.so 00:04:28.251 CC module/event/subsystems/accel/accel.o 00:04:28.509 LIB libspdk_event_accel.a 00:04:28.509 SO libspdk_event_accel.so.6.0 00:04:28.509 SYMLINK libspdk_event_accel.so 00:04:28.767 CC module/event/subsystems/bdev/bdev.o 00:04:29.026 LIB libspdk_event_bdev.a 00:04:29.026 SO libspdk_event_bdev.so.6.0 00:04:29.026 SYMLINK libspdk_event_bdev.so 00:04:29.026 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:29.026 CC module/event/subsystems/nbd/nbd.o 00:04:29.026 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:29.026 CC module/event/subsystems/scsi/scsi.o 00:04:29.026 CC module/event/subsystems/ublk/ublk.o 00:04:29.284 LIB libspdk_event_nbd.a 00:04:29.284 LIB libspdk_event_scsi.a 00:04:29.284 SO libspdk_event_nbd.so.6.0 00:04:29.284 LIB libspdk_event_ublk.a 00:04:29.284 SO libspdk_event_ublk.so.3.0 00:04:29.284 SO libspdk_event_scsi.so.6.0 00:04:29.284 SYMLINK libspdk_event_nbd.so 00:04:29.284 SYMLINK libspdk_event_ublk.so 00:04:29.284 SYMLINK libspdk_event_scsi.so 00:04:29.284 LIB libspdk_event_nvmf.a 00:04:29.284 SO libspdk_event_nvmf.so.6.0 00:04:29.543 SYMLINK libspdk_event_nvmf.so 00:04:29.543 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:29.543 CC module/event/subsystems/iscsi/iscsi.o 00:04:29.543 LIB libspdk_event_vhost_scsi.a 00:04:29.543 SO libspdk_event_vhost_scsi.so.3.0 00:04:29.543 LIB 
libspdk_event_iscsi.a 00:04:29.543 SO libspdk_event_iscsi.so.6.0 00:04:29.803 SYMLINK libspdk_event_vhost_scsi.so 00:04:29.803 SYMLINK libspdk_event_iscsi.so 00:04:29.803 SO libspdk.so.6.0 00:04:29.803 SYMLINK libspdk.so 00:04:30.064 CXX app/trace/trace.o 00:04:30.064 CC app/spdk_nvme_perf/perf.o 00:04:30.064 CC app/trace_record/trace_record.o 00:04:30.064 CC app/spdk_lspci/spdk_lspci.o 00:04:30.064 CC app/spdk_nvme_identify/identify.o 00:04:30.064 CC app/iscsi_tgt/iscsi_tgt.o 00:04:30.064 CC app/nvmf_tgt/nvmf_main.o 00:04:30.064 CC app/spdk_tgt/spdk_tgt.o 00:04:30.064 CC examples/util/zipf/zipf.o 00:04:30.064 CC test/thread/poller_perf/poller_perf.o 00:04:30.064 LINK spdk_lspci 00:04:30.064 LINK nvmf_tgt 00:04:30.324 LINK zipf 00:04:30.324 LINK iscsi_tgt 00:04:30.324 LINK poller_perf 00:04:30.324 LINK spdk_trace_record 00:04:30.324 LINK spdk_trace 00:04:30.324 LINK spdk_tgt 00:04:30.324 CC app/spdk_nvme_discover/discovery_aer.o 00:04:30.324 CC app/spdk_top/spdk_top.o 00:04:30.324 CC examples/ioat/perf/perf.o 00:04:30.586 CC examples/ioat/verify/verify.o 00:04:30.586 CC examples/vmd/lsvmd/lsvmd.o 00:04:30.586 CC app/spdk_dd/spdk_dd.o 00:04:30.586 CC examples/vmd/led/led.o 00:04:30.586 CC test/dma/test_dma/test_dma.o 00:04:30.586 LINK spdk_nvme_discover 00:04:30.586 LINK lsvmd 00:04:30.586 LINK led 00:04:30.586 LINK ioat_perf 00:04:30.586 LINK verify 00:04:30.846 LINK spdk_dd 00:04:30.846 CC app/fio/nvme/fio_plugin.o 00:04:30.846 LINK spdk_nvme_identify 00:04:30.846 CC app/vhost/vhost.o 00:04:30.846 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:30.846 CC app/fio/bdev/fio_plugin.o 00:04:30.846 LINK spdk_nvme_perf 00:04:30.846 CC examples/idxd/perf/perf.o 00:04:31.107 LINK vhost 00:04:31.107 LINK test_dma 00:04:31.107 LINK interrupt_tgt 00:04:31.107 CC examples/sock/hello_world/hello_sock.o 00:04:31.107 CC examples/thread/thread/thread_ex.o 00:04:31.107 LINK idxd_perf 00:04:31.107 TEST_HEADER include/spdk/accel.h 00:04:31.107 TEST_HEADER include/spdk/accel_module.h 00:04:31.107 TEST_HEADER include/spdk/assert.h 00:04:31.107 TEST_HEADER include/spdk/barrier.h 00:04:31.107 TEST_HEADER include/spdk/base64.h 00:04:31.107 TEST_HEADER include/spdk/bdev.h 00:04:31.107 TEST_HEADER include/spdk/bdev_module.h 00:04:31.107 TEST_HEADER include/spdk/bdev_zone.h 00:04:31.107 TEST_HEADER include/spdk/bit_array.h 00:04:31.107 CC test/app/bdev_svc/bdev_svc.o 00:04:31.107 TEST_HEADER include/spdk/bit_pool.h 00:04:31.107 TEST_HEADER include/spdk/blob_bdev.h 00:04:31.107 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:31.107 TEST_HEADER include/spdk/blobfs.h 00:04:31.107 TEST_HEADER include/spdk/blob.h 00:04:31.107 TEST_HEADER include/spdk/conf.h 00:04:31.107 TEST_HEADER include/spdk/config.h 00:04:31.107 TEST_HEADER include/spdk/cpuset.h 00:04:31.107 TEST_HEADER include/spdk/crc16.h 00:04:31.107 TEST_HEADER include/spdk/crc32.h 00:04:31.107 TEST_HEADER include/spdk/crc64.h 00:04:31.107 TEST_HEADER include/spdk/dif.h 00:04:31.107 TEST_HEADER include/spdk/dma.h 00:04:31.107 TEST_HEADER include/spdk/endian.h 00:04:31.107 TEST_HEADER include/spdk/env_dpdk.h 00:04:31.107 TEST_HEADER include/spdk/env.h 00:04:31.107 TEST_HEADER include/spdk/event.h 00:04:31.366 TEST_HEADER include/spdk/fd_group.h 00:04:31.366 TEST_HEADER include/spdk/fd.h 00:04:31.366 TEST_HEADER include/spdk/file.h 00:04:31.366 TEST_HEADER include/spdk/fsdev.h 00:04:31.366 TEST_HEADER include/spdk/fsdev_module.h 00:04:31.366 TEST_HEADER include/spdk/ftl.h 00:04:31.366 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:31.366 TEST_HEADER 
include/spdk/gpt_spec.h 00:04:31.366 TEST_HEADER include/spdk/hexlify.h 00:04:31.366 TEST_HEADER include/spdk/histogram_data.h 00:04:31.366 TEST_HEADER include/spdk/idxd.h 00:04:31.366 TEST_HEADER include/spdk/idxd_spec.h 00:04:31.366 TEST_HEADER include/spdk/init.h 00:04:31.366 TEST_HEADER include/spdk/ioat.h 00:04:31.366 TEST_HEADER include/spdk/ioat_spec.h 00:04:31.366 TEST_HEADER include/spdk/iscsi_spec.h 00:04:31.366 TEST_HEADER include/spdk/json.h 00:04:31.366 TEST_HEADER include/spdk/jsonrpc.h 00:04:31.366 TEST_HEADER include/spdk/keyring.h 00:04:31.366 TEST_HEADER include/spdk/keyring_module.h 00:04:31.366 TEST_HEADER include/spdk/likely.h 00:04:31.366 TEST_HEADER include/spdk/log.h 00:04:31.366 TEST_HEADER include/spdk/lvol.h 00:04:31.366 TEST_HEADER include/spdk/md5.h 00:04:31.366 TEST_HEADER include/spdk/memory.h 00:04:31.366 TEST_HEADER include/spdk/mmio.h 00:04:31.366 TEST_HEADER include/spdk/nbd.h 00:04:31.366 TEST_HEADER include/spdk/net.h 00:04:31.366 TEST_HEADER include/spdk/notify.h 00:04:31.366 TEST_HEADER include/spdk/nvme.h 00:04:31.366 TEST_HEADER include/spdk/nvme_intel.h 00:04:31.366 CC test/event/event_perf/event_perf.o 00:04:31.366 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:31.366 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:31.366 TEST_HEADER include/spdk/nvme_spec.h 00:04:31.366 TEST_HEADER include/spdk/nvme_zns.h 00:04:31.366 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:31.366 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:31.366 TEST_HEADER include/spdk/nvmf.h 00:04:31.366 TEST_HEADER include/spdk/nvmf_spec.h 00:04:31.366 TEST_HEADER include/spdk/nvmf_transport.h 00:04:31.366 TEST_HEADER include/spdk/opal.h 00:04:31.366 TEST_HEADER include/spdk/opal_spec.h 00:04:31.366 LINK spdk_top 00:04:31.366 TEST_HEADER include/spdk/pci_ids.h 00:04:31.366 TEST_HEADER include/spdk/pipe.h 00:04:31.366 TEST_HEADER include/spdk/queue.h 00:04:31.366 TEST_HEADER include/spdk/reduce.h 00:04:31.366 TEST_HEADER include/spdk/rpc.h 00:04:31.366 TEST_HEADER include/spdk/scheduler.h 00:04:31.366 TEST_HEADER include/spdk/scsi.h 00:04:31.366 TEST_HEADER include/spdk/scsi_spec.h 00:04:31.366 LINK thread 00:04:31.366 TEST_HEADER include/spdk/sock.h 00:04:31.366 TEST_HEADER include/spdk/stdinc.h 00:04:31.366 TEST_HEADER include/spdk/string.h 00:04:31.366 TEST_HEADER include/spdk/thread.h 00:04:31.366 CC test/env/mem_callbacks/mem_callbacks.o 00:04:31.366 TEST_HEADER include/spdk/trace.h 00:04:31.366 TEST_HEADER include/spdk/trace_parser.h 00:04:31.366 TEST_HEADER include/spdk/tree.h 00:04:31.366 TEST_HEADER include/spdk/ublk.h 00:04:31.366 TEST_HEADER include/spdk/util.h 00:04:31.366 TEST_HEADER include/spdk/uuid.h 00:04:31.366 LINK bdev_svc 00:04:31.366 TEST_HEADER include/spdk/version.h 00:04:31.366 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:31.366 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:31.366 TEST_HEADER include/spdk/vhost.h 00:04:31.366 LINK hello_sock 00:04:31.366 TEST_HEADER include/spdk/vmd.h 00:04:31.366 TEST_HEADER include/spdk/xor.h 00:04:31.366 TEST_HEADER include/spdk/zipf.h 00:04:31.366 CXX test/cpp_headers/accel.o 00:04:31.366 LINK spdk_bdev 00:04:31.366 LINK spdk_nvme 00:04:31.366 LINK event_perf 00:04:31.366 CXX test/cpp_headers/accel_module.o 00:04:31.366 CXX test/cpp_headers/assert.o 00:04:31.366 CXX test/cpp_headers/barrier.o 00:04:31.627 LINK mem_callbacks 00:04:31.627 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:31.627 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:31.627 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:31.627 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:31.627 CXX test/cpp_headers/base64.o 00:04:31.627 CC test/event/reactor/reactor.o 00:04:31.627 CC examples/nvme/hello_world/hello_world.o 00:04:31.627 CC test/env/vtophys/vtophys.o 00:04:31.627 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:31.887 CC examples/accel/perf/accel_perf.o 00:04:31.887 CXX test/cpp_headers/bdev.o 00:04:31.887 LINK reactor 00:04:31.887 LINK vtophys 00:04:31.887 LINK hello_world 00:04:31.887 CC examples/blob/hello_world/hello_blob.o 00:04:31.887 CXX test/cpp_headers/bdev_module.o 00:04:31.887 LINK nvme_fuzz 00:04:31.887 LINK hello_fsdev 00:04:31.887 CC test/event/reactor_perf/reactor_perf.o 00:04:31.887 CC examples/nvme/reconnect/reconnect.o 00:04:31.887 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:31.887 LINK vhost_fuzz 00:04:32.148 LINK hello_blob 00:04:32.148 CXX test/cpp_headers/bdev_zone.o 00:04:32.148 LINK reactor_perf 00:04:32.148 LINK env_dpdk_post_init 00:04:32.148 CC test/env/memory/memory_ut.o 00:04:32.148 CC test/app/histogram_perf/histogram_perf.o 00:04:32.148 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:32.148 CXX test/cpp_headers/bit_array.o 00:04:32.417 LINK histogram_perf 00:04:32.417 LINK accel_perf 00:04:32.417 CC examples/blob/cli/blobcli.o 00:04:32.417 CC test/app/jsoncat/jsoncat.o 00:04:32.417 CC test/event/app_repeat/app_repeat.o 00:04:32.417 LINK reconnect 00:04:32.417 CXX test/cpp_headers/bit_pool.o 00:04:32.417 LINK jsoncat 00:04:32.417 LINK app_repeat 00:04:32.417 CC test/event/scheduler/scheduler.o 00:04:32.417 CXX test/cpp_headers/blob_bdev.o 00:04:32.417 CC test/app/stub/stub.o 00:04:32.675 CXX test/cpp_headers/blobfs_bdev.o 00:04:32.675 CC examples/nvme/arbitration/arbitration.o 00:04:32.675 CC examples/bdev/hello_world/hello_bdev.o 00:04:32.675 LINK scheduler 00:04:32.675 LINK stub 00:04:32.675 LINK nvme_manage 00:04:32.675 LINK blobcli 00:04:32.675 CXX test/cpp_headers/blobfs.o 00:04:32.675 LINK memory_ut 00:04:32.675 CC test/nvme/aer/aer.o 00:04:32.933 CC test/rpc_client/rpc_client_test.o 00:04:32.933 CXX test/cpp_headers/blob.o 00:04:32.933 CC test/nvme/reset/reset.o 00:04:32.933 CC test/env/pci/pci_ut.o 00:04:32.933 LINK hello_bdev 00:04:32.933 LINK arbitration 00:04:32.933 CC test/nvme/e2edp/nvme_dp.o 00:04:32.933 CC test/nvme/sgl/sgl.o 00:04:32.933 LINK aer 00:04:32.933 CXX test/cpp_headers/conf.o 00:04:32.933 LINK rpc_client_test 00:04:32.933 LINK reset 00:04:33.193 CC examples/nvme/hotplug/hotplug.o 00:04:33.193 LINK nvme_dp 00:04:33.193 CC examples/bdev/bdevperf/bdevperf.o 00:04:33.193 CXX test/cpp_headers/config.o 00:04:33.193 LINK sgl 00:04:33.193 CXX test/cpp_headers/cpuset.o 00:04:33.193 LINK pci_ut 00:04:33.193 CC test/nvme/overhead/overhead.o 00:04:33.193 CC test/nvme/err_injection/err_injection.o 00:04:33.193 CC test/accel/dif/dif.o 00:04:33.193 CXX test/cpp_headers/crc16.o 00:04:33.193 CC test/nvme/startup/startup.o 00:04:33.455 LINK iscsi_fuzz 00:04:33.455 CC test/nvme/reserve/reserve.o 00:04:33.455 LINK hotplug 00:04:33.455 LINK err_injection 00:04:33.455 CXX test/cpp_headers/crc32.o 00:04:33.455 CXX test/cpp_headers/crc64.o 00:04:33.455 LINK startup 00:04:33.455 LINK overhead 00:04:33.455 CXX test/cpp_headers/dif.o 00:04:33.455 CC test/blobfs/mkfs/mkfs.o 00:04:33.455 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:33.455 LINK reserve 00:04:33.455 CC examples/nvme/abort/abort.o 00:04:33.717 CXX test/cpp_headers/dma.o 00:04:33.717 LINK cmb_copy 00:04:33.717 LINK mkfs 00:04:33.717 CC test/nvme/simple_copy/simple_copy.o 00:04:33.717 CC 
test/nvme/connect_stress/connect_stress.o 00:04:33.717 CXX test/cpp_headers/endian.o 00:04:33.717 CC test/nvme/boot_partition/boot_partition.o 00:04:33.717 LINK bdevperf 00:04:33.717 CC test/lvol/esnap/esnap.o 00:04:33.975 LINK connect_stress 00:04:33.975 CXX test/cpp_headers/env_dpdk.o 00:04:33.975 LINK simple_copy 00:04:33.975 CC test/nvme/compliance/nvme_compliance.o 00:04:33.975 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:33.975 LINK boot_partition 00:04:33.975 CXX test/cpp_headers/env.o 00:04:33.975 LINK abort 00:04:33.975 LINK dif 00:04:33.975 CXX test/cpp_headers/event.o 00:04:33.975 LINK pmr_persistence 00:04:33.975 CC test/nvme/fused_ordering/fused_ordering.o 00:04:33.975 CXX test/cpp_headers/fd_group.o 00:04:34.234 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:34.234 CXX test/cpp_headers/fd.o 00:04:34.234 CC test/nvme/fdp/fdp.o 00:04:34.234 CC test/nvme/cuse/cuse.o 00:04:34.234 CXX test/cpp_headers/file.o 00:04:34.234 LINK doorbell_aers 00:04:34.234 LINK nvme_compliance 00:04:34.234 CXX test/cpp_headers/fsdev.o 00:04:34.234 CXX test/cpp_headers/fsdev_module.o 00:04:34.234 LINK fused_ordering 00:04:34.234 CC examples/nvmf/nvmf/nvmf.o 00:04:34.234 CC test/bdev/bdevio/bdevio.o 00:04:34.495 CXX test/cpp_headers/ftl.o 00:04:34.495 CXX test/cpp_headers/fuse_dispatcher.o 00:04:34.495 CXX test/cpp_headers/gpt_spec.o 00:04:34.495 CXX test/cpp_headers/hexlify.o 00:04:34.495 CXX test/cpp_headers/histogram_data.o 00:04:34.495 LINK fdp 00:04:34.495 CXX test/cpp_headers/idxd.o 00:04:34.495 CXX test/cpp_headers/idxd_spec.o 00:04:34.495 CXX test/cpp_headers/init.o 00:04:34.495 LINK nvmf 00:04:34.495 CXX test/cpp_headers/ioat.o 00:04:34.495 CXX test/cpp_headers/ioat_spec.o 00:04:34.495 CXX test/cpp_headers/iscsi_spec.o 00:04:34.806 CXX test/cpp_headers/json.o 00:04:34.806 CXX test/cpp_headers/jsonrpc.o 00:04:34.807 CXX test/cpp_headers/keyring.o 00:04:34.807 CXX test/cpp_headers/keyring_module.o 00:04:34.807 CXX test/cpp_headers/likely.o 00:04:34.807 LINK bdevio 00:04:34.807 CXX test/cpp_headers/log.o 00:04:34.807 CXX test/cpp_headers/lvol.o 00:04:34.807 CXX test/cpp_headers/md5.o 00:04:34.807 CXX test/cpp_headers/memory.o 00:04:34.807 CXX test/cpp_headers/mmio.o 00:04:34.807 CXX test/cpp_headers/nbd.o 00:04:34.807 CXX test/cpp_headers/net.o 00:04:34.807 CXX test/cpp_headers/notify.o 00:04:34.807 CXX test/cpp_headers/nvme.o 00:04:34.807 CXX test/cpp_headers/nvme_intel.o 00:04:34.807 CXX test/cpp_headers/nvme_ocssd.o 00:04:34.807 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:35.067 CXX test/cpp_headers/nvme_spec.o 00:04:35.067 CXX test/cpp_headers/nvme_zns.o 00:04:35.067 CXX test/cpp_headers/nvmf_cmd.o 00:04:35.067 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:35.067 CXX test/cpp_headers/nvmf.o 00:04:35.067 CXX test/cpp_headers/nvmf_spec.o 00:04:35.067 CXX test/cpp_headers/nvmf_transport.o 00:04:35.067 CXX test/cpp_headers/opal.o 00:04:35.067 CXX test/cpp_headers/opal_spec.o 00:04:35.067 CXX test/cpp_headers/pci_ids.o 00:04:35.067 CXX test/cpp_headers/pipe.o 00:04:35.067 CXX test/cpp_headers/queue.o 00:04:35.067 CXX test/cpp_headers/reduce.o 00:04:35.067 CXX test/cpp_headers/rpc.o 00:04:35.067 CXX test/cpp_headers/scheduler.o 00:04:35.067 CXX test/cpp_headers/scsi.o 00:04:35.326 CXX test/cpp_headers/scsi_spec.o 00:04:35.326 CXX test/cpp_headers/sock.o 00:04:35.326 LINK cuse 00:04:35.326 CXX test/cpp_headers/stdinc.o 00:04:35.326 CXX test/cpp_headers/string.o 00:04:35.326 CXX test/cpp_headers/thread.o 00:04:35.326 CXX test/cpp_headers/trace.o 00:04:35.326 CXX 
test/cpp_headers/trace_parser.o 00:04:35.326 CXX test/cpp_headers/tree.o 00:04:35.326 CXX test/cpp_headers/ublk.o 00:04:35.326 CXX test/cpp_headers/util.o 00:04:35.326 CXX test/cpp_headers/uuid.o 00:04:35.326 CXX test/cpp_headers/version.o 00:04:35.326 CXX test/cpp_headers/vfio_user_pci.o 00:04:35.326 CXX test/cpp_headers/vfio_user_spec.o 00:04:35.326 CXX test/cpp_headers/vhost.o 00:04:35.326 CXX test/cpp_headers/vmd.o 00:04:35.326 CXX test/cpp_headers/xor.o 00:04:35.583 CXX test/cpp_headers/zipf.o 00:04:38.891 LINK esnap 00:04:38.891 ************************************ 00:04:38.891 END TEST make 00:04:38.891 ************************************ 00:04:38.891 00:04:38.891 real 1m3.936s 00:04:38.891 user 5m8.331s 00:04:38.891 sys 0m50.948s 00:04:38.891 23:10:02 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:38.891 23:10:02 make -- common/autotest_common.sh@10 -- $ set +x 00:04:39.154 23:10:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:39.154 23:10:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:39.154 23:10:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:39.154 23:10:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.154 23:10:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:39.154 23:10:02 -- pm/common@44 -- $ pid=5808 00:04:39.154 23:10:02 -- pm/common@50 -- $ kill -TERM 5808 00:04:39.154 23:10:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.154 23:10:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:39.154 23:10:02 -- pm/common@44 -- $ pid=5809 00:04:39.154 23:10:02 -- pm/common@50 -- $ kill -TERM 5809 00:04:39.154 23:10:02 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:39.154 23:10:02 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:39.154 23:10:02 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.154 23:10:02 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.154 23:10:02 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.154 23:10:02 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.154 23:10:02 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.154 23:10:02 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.154 23:10:02 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.154 23:10:02 -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.154 23:10:02 -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.154 23:10:02 -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.154 23:10:02 -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.154 23:10:02 -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.154 23:10:02 -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.154 23:10:02 -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.154 23:10:02 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.154 23:10:02 -- scripts/common.sh@344 -- # case "$op" in 00:04:39.154 23:10:02 -- scripts/common.sh@345 -- # : 1 00:04:39.154 23:10:02 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.154 23:10:02 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.154 23:10:02 -- scripts/common.sh@365 -- # decimal 1 00:04:39.154 23:10:02 -- scripts/common.sh@353 -- # local d=1 00:04:39.154 23:10:02 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.154 23:10:02 -- scripts/common.sh@355 -- # echo 1 00:04:39.154 23:10:02 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.154 23:10:02 -- scripts/common.sh@366 -- # decimal 2 00:04:39.154 23:10:02 -- scripts/common.sh@353 -- # local d=2 00:04:39.154 23:10:02 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.154 23:10:02 -- scripts/common.sh@355 -- # echo 2 00:04:39.154 23:10:02 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.154 23:10:02 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.154 23:10:02 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.154 23:10:02 -- scripts/common.sh@368 -- # return 0 00:04:39.154 23:10:02 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.154 23:10:02 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.154 --rc genhtml_branch_coverage=1 00:04:39.154 --rc genhtml_function_coverage=1 00:04:39.154 --rc genhtml_legend=1 00:04:39.154 --rc geninfo_all_blocks=1 00:04:39.154 --rc geninfo_unexecuted_blocks=1 00:04:39.154 00:04:39.154 ' 00:04:39.154 23:10:02 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.154 --rc genhtml_branch_coverage=1 00:04:39.154 --rc genhtml_function_coverage=1 00:04:39.154 --rc genhtml_legend=1 00:04:39.154 --rc geninfo_all_blocks=1 00:04:39.154 --rc geninfo_unexecuted_blocks=1 00:04:39.154 00:04:39.154 ' 00:04:39.154 23:10:02 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.154 --rc genhtml_branch_coverage=1 00:04:39.154 --rc genhtml_function_coverage=1 00:04:39.154 --rc genhtml_legend=1 00:04:39.154 --rc geninfo_all_blocks=1 00:04:39.154 --rc geninfo_unexecuted_blocks=1 00:04:39.154 00:04:39.154 ' 00:04:39.154 23:10:02 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.154 --rc genhtml_branch_coverage=1 00:04:39.154 --rc genhtml_function_coverage=1 00:04:39.154 --rc genhtml_legend=1 00:04:39.154 --rc geninfo_all_blocks=1 00:04:39.154 --rc geninfo_unexecuted_blocks=1 00:04:39.154 00:04:39.154 ' 00:04:39.154 23:10:02 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:39.154 23:10:02 -- nvmf/common.sh@7 -- # uname -s 00:04:39.154 23:10:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.154 23:10:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.154 23:10:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.154 23:10:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.154 23:10:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.154 23:10:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.154 23:10:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.154 23:10:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.154 23:10:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.154 23:10:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.154 23:10:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5eabce85-047d-4605-a6cf-5b958243ebf4 00:04:39.154 
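The xtrace above walks scripts/common.sh's version gate: lt hands off to cmp_versions, which splits both version strings on '.', '-' and ':' and compares them field by field as decimals, so 1.15 sorts below 2 and the 1.x-era LCOV_OPTS get exported. A minimal re-implementation of that field-wise comparison is sketched below; vercmp_lt is a hypothetical name, and treating absent fields as 0 is an assumption, not a quote of the real helper.

    # Hypothetical sketch of the field-wise "<" comparison the trace steps
    # through; not the actual scripts/common.sh source.
    vercmp_lt() {                    # vercmp_lt 1.15 2  -> exit 0 when 1.15 < 2
        local IFS=.-:                # same separators the trace splits on
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=$(( 10#${a[i]:-0} )) y=$(( 10#${b[i]:-0} ))  # absent fields count as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                     # equal is not "less than"
    }
    vercmp_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov is 1.x'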
23:10:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5eabce85-047d-4605-a6cf-5b958243ebf4 00:04:39.154 23:10:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:39.154 23:10:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.154 23:10:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:39.154 23:10:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.154 23:10:02 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:39.154 23:10:02 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:39.154 23:10:02 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.154 23:10:02 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.154 23:10:02 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.154 23:10:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.154 23:10:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.154 23:10:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.154 23:10:02 -- paths/export.sh@5 -- # export PATH 00:04:39.154 23:10:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.154 23:10:02 -- nvmf/common.sh@51 -- # : 0 00:04:39.154 23:10:02 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:39.154 23:10:02 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:39.154 23:10:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.154 23:10:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.154 23:10:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.154 23:10:02 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:39.154 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:39.154 23:10:02 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:39.154 23:10:02 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:39.154 23:10:02 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:39.154 23:10:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:39.154 23:10:02 -- spdk/autotest.sh@32 -- # uname -s 00:04:39.154 23:10:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:39.154 23:10:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:39.154 23:10:02 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:39.155 23:10:02 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:39.155 23:10:02 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:39.155 23:10:02 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:39.155 23:10:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:39.155 23:10:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:39.155 23:10:02 -- spdk/autotest.sh@48 -- # udevadm_pid=66274 00:04:39.155 23:10:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:39.155 23:10:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:39.155 23:10:02 -- pm/common@17 -- # local monitor 00:04:39.155 23:10:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.155 23:10:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.155 23:10:02 -- pm/common@25 -- # sleep 1 00:04:39.155 23:10:02 -- pm/common@21 -- # date +%s 00:04:39.155 23:10:02 -- pm/common@21 -- # date +%s 00:04:39.155 23:10:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731885002 00:04:39.155 23:10:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731885002 00:04:39.414 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731885002_collect-vmstat.pm.log 00:04:39.414 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731885002_collect-cpu-load.pm.log 00:04:40.353 23:10:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:40.353 23:10:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:40.353 23:10:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:40.353 23:10:03 -- common/autotest_common.sh@10 -- # set +x 00:04:40.353 23:10:03 -- spdk/autotest.sh@59 -- # create_test_list 00:04:40.353 23:10:03 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:40.353 23:10:03 -- common/autotest_common.sh@10 -- # set +x 00:04:40.353 23:10:03 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:40.353 23:10:03 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:40.353 23:10:04 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:40.353 23:10:04 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:40.353 23:10:04 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:40.353 23:10:04 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:40.353 23:10:04 -- common/autotest_common.sh@1457 -- # uname 00:04:40.353 23:10:04 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:40.353 23:10:04 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:40.353 23:10:04 -- common/autotest_common.sh@1477 -- # uname 00:04:40.353 23:10:04 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:40.353 23:10:04 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:40.353 23:10:04 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:40.353 lcov: LCOV version 1.15 00:04:40.353 23:10:04 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:55.260 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:55.260 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:10.178 23:10:31 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:10.178 23:10:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.178 23:10:31 -- common/autotest_common.sh@10 -- # set +x 00:05:10.178 23:10:31 -- spdk/autotest.sh@78 -- # rm -f 00:05:10.178 23:10:31 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.178 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.178 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:10.178 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:10.178 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:10.178 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:10.178 23:10:32 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:10.178 23:10:32 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:10.178 23:10:32 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:10.178 23:10:32 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:10.178 23:10:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.178 23:10:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:10.178 23:10:32 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:10.178 23:10:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:10.178 23:10:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.178 23:10:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.178 23:10:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:10.178 23:10:32 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:10.178 23:10:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:10.178 23:10:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.178 23:10:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.178 23:10:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:05:10.179 23:10:32 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:10.179 23:10:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:10.179 23:10:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.179 23:10:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.179 23:10:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:05:10.179 23:10:32 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:10.179 23:10:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:10.179 23:10:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.179 23:10:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.179 23:10:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2c2n1 00:05:10.179 23:10:32 -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:05:10.179 23:10:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:05:10.179 
23:10:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.179 23:10:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.179 23:10:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:05:10.179 23:10:32 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:10.179 23:10:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:10.179 23:10:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.179 23:10:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:10.179 23:10:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:05:10.179 23:10:32 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:10.179 23:10:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:10.179 23:10:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:10.179 23:10:32 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:10.179 23:10:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.179 23:10:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.179 23:10:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:10.179 23:10:32 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:10.179 23:10:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:10.179 No valid GPT data, bailing 00:05:10.179 23:10:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:10.179 23:10:32 -- scripts/common.sh@394 -- # pt= 00:05:10.179 23:10:32 -- scripts/common.sh@395 -- # return 1 00:05:10.179 23:10:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:10.179 1+0 records in 00:05:10.179 1+0 records out 00:05:10.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00695395 s, 151 MB/s 00:05:10.179 23:10:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.179 23:10:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.179 23:10:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:10.179 23:10:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:10.179 23:10:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:10.179 No valid GPT data, bailing 00:05:10.179 23:10:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:10.179 23:10:32 -- scripts/common.sh@394 -- # pt= 00:05:10.179 23:10:32 -- scripts/common.sh@395 -- # return 1 00:05:10.179 23:10:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:10.179 1+0 records in 00:05:10.179 1+0 records out 00:05:10.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0051201 s, 205 MB/s 00:05:10.179 23:10:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.179 23:10:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.179 23:10:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:10.179 23:10:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:10.179 23:10:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:10.179 No valid GPT data, bailing 00:05:10.179 23:10:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:10.179 23:10:32 -- scripts/common.sh@394 -- # pt= 00:05:10.179 23:10:32 -- scripts/common.sh@395 -- # return 1 00:05:10.179 23:10:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:10.179 1+0 
records in 00:05:10.179 1+0 records out 00:05:10.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0052977 s, 198 MB/s 00:05:10.179 23:10:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.179 23:10:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.179 23:10:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:10.179 23:10:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:10.179 23:10:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:10.179 No valid GPT data, bailing 00:05:10.179 23:10:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:10.179 23:10:32 -- scripts/common.sh@394 -- # pt= 00:05:10.179 23:10:32 -- scripts/common.sh@395 -- # return 1 00:05:10.179 23:10:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:10.179 1+0 records in 00:05:10.179 1+0 records out 00:05:10.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00551939 s, 190 MB/s 00:05:10.179 23:10:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.179 23:10:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.179 23:10:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:10.179 23:10:32 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:10.179 23:10:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:10.179 No valid GPT data, bailing 00:05:10.179 23:10:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:10.179 23:10:32 -- scripts/common.sh@394 -- # pt= 00:05:10.179 23:10:32 -- scripts/common.sh@395 -- # return 1 00:05:10.179 23:10:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:10.179 1+0 records in 00:05:10.179 1+0 records out 00:05:10.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00429973 s, 244 MB/s 00:05:10.179 23:10:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.179 23:10:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:10.179 23:10:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:10.179 23:10:32 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:10.179 23:10:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:10.179 No valid GPT data, bailing 00:05:10.179 23:10:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:10.179 23:10:33 -- scripts/common.sh@394 -- # pt= 00:05:10.179 23:10:33 -- scripts/common.sh@395 -- # return 1 00:05:10.179 23:10:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:10.179 1+0 records in 00:05:10.179 1+0 records out 00:05:10.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281292 s, 37.3 MB/s 00:05:10.179 23:10:33 -- spdk/autotest.sh@105 -- # sync 00:05:10.179 23:10:33 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:10.179 23:10:33 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:10.179 23:10:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:11.116 23:10:34 -- spdk/autotest.sh@111 -- # uname -s 00:05:11.116 23:10:34 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:11.116 23:10:34 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:11.116 23:10:34 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:11.683 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.941 
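Every device pass above has the same shape: spdk-gpt.py and blkid probe the namespace for a partition table, and only when no PTTYPE comes back (the "No valid GPT data, bailing" case, where block_in_use returns 1) is the first MiB zeroed with dd so later tests see a blank disk. A condensed sketch of that loop follows; wipe_blank_nvme is a made-up name, and it carries the same assumption the log does, namely that a device without a partition table is fair game to wipe.

    # Hypothetical condensation of the probe-then-wipe loop traced above.
    # Destructive by design, exactly like the logged dd invocations.
    shopt -s extglob
    wipe_blank_nvme() {
        local dev
        for dev in /dev/nvme*n!(*p*); do                  # namespaces, not partitions
            if [[ -z "$(blkid -s PTTYPE -o value "$dev")" ]]; then
                dd if=/dev/zero of="$dev" bs=1M count=1   # blank the first MiB
            fi
        done
        sync                                              # flush before the next stage
    }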
Hugepages 00:05:11.941 node hugesize free / total 00:05:11.941 node0 1048576kB 0 / 0 00:05:11.941 node0 2048kB 0 / 0 00:05:11.941 00:05:11.941 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:12.200 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:12.200 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:12.200 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:12.200 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:12.458 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:12.458 23:10:36 -- spdk/autotest.sh@117 -- # uname -s 00:05:12.458 23:10:36 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:12.458 23:10:36 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:12.458 23:10:36 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:12.716 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.282 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.282 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.282 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.282 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.541 23:10:37 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:14.478 23:10:38 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:14.478 23:10:38 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:14.478 23:10:38 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:14.478 23:10:38 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:14.478 23:10:38 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:14.478 23:10:38 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:14.478 23:10:38 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:14.478 23:10:38 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:14.478 23:10:38 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:14.478 23:10:38 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:14.478 23:10:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:14.478 23:10:38 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:14.737 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.998 Waiting for block devices as requested 00:05:14.998 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:14.998 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:15.259 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:15.259 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:20.566 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:20.566 23:10:44 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:20.566 23:10:44 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:20.566 23:10:44 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:20.566 23:10:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:20.566 23:10:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:20.566 23:10:44 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:20.566 23:10:44 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:20.566 23:10:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:20.566 23:10:44 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:20.566 23:10:44 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:20.566 23:10:44 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:20.566 23:10:44 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:20.566 23:10:44 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:20.566 23:10:44 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:20.566 23:10:44 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:20.566 23:10:44 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:20.566 23:10:44 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:20.566 23:10:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:20.567 23:10:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:20.567 23:10:44 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:20.567 23:10:44 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:20.567 23:10:44 -- common/autotest_common.sh@1543 -- # continue 00:05:20.567 23:10:44 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:20.567 23:10:44 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:20.567 23:10:44 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:20.567 23:10:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:20.567 23:10:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:20.567 23:10:44 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:20.567 23:10:44 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:20.567 23:10:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:20.567 23:10:44 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:20.567 23:10:44 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:20.567 23:10:44 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:20.567 23:10:44 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:20.567 23:10:44 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:20.567 23:10:44 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:20.567 23:10:44 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:20.567 23:10:44 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:20.567 23:10:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:20.567 23:10:44 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:20.567 23:10:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:20.567 23:10:44 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:20.567 23:10:44 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:20.567 23:10:44 -- common/autotest_common.sh@1543 -- # continue 00:05:20.567 23:10:44 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:20.568 23:10:44 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:20.568 23:10:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:20.568 23:10:44 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:20.568 23:10:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:20.568 23:10:44 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:20.568 23:10:44 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:20.568 23:10:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:20.568 23:10:44 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:20.568 23:10:44 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:20.568 23:10:44 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:20.568 23:10:44 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:20.568 23:10:44 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:20.568 23:10:44 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:20.568 23:10:44 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:20.568 23:10:44 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:20.568 23:10:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:20.568 23:10:44 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:20.568 23:10:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:20.568 23:10:44 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:20.568 23:10:44 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:20.568 23:10:44 -- common/autotest_common.sh@1543 -- # continue 00:05:20.568 23:10:44 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:20.568 23:10:44 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:20.568 23:10:44 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:20.568 23:10:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:20.568 23:10:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:20.569 23:10:44 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:20.569 23:10:44 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:20.569 23:10:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:20.569 23:10:44 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:20.569 23:10:44 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:20.569 23:10:44 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:20.569 23:10:44 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:20.569 23:10:44 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:20.569 23:10:44 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:20.569 23:10:44 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:20.569 23:10:44 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:20.569 23:10:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:20.569 23:10:44 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:20.569 23:10:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:20.569 23:10:44 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:20.569 23:10:44 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:05:20.569 23:10:44 -- common/autotest_common.sh@1543 -- # continue 00:05:20.569 23:10:44 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:20.569 23:10:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:20.569 23:10:44 -- common/autotest_common.sh@10 -- # set +x 00:05:20.569 23:10:44 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:20.569 23:10:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.569 23:10:44 -- common/autotest_common.sh@10 -- # set +x 00:05:20.569 23:10:44 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:21.148 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.410 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.669 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.669 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.669 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.669 23:10:45 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:21.669 23:10:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:21.669 23:10:45 -- common/autotest_common.sh@10 -- # set +x 00:05:21.669 23:10:45 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:21.669 23:10:45 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:21.669 23:10:45 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:21.669 23:10:45 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:21.669 23:10:45 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:21.669 23:10:45 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:21.669 23:10:45 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:21.669 23:10:45 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:21.669 23:10:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:21.669 23:10:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:21.669 23:10:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:21.669 23:10:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:21.669 23:10:45 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:21.669 23:10:45 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:21.669 23:10:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:21.669 23:10:45 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:21.669 23:10:45 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:21.669 23:10:45 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:21.669 23:10:45 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:21.669 23:10:45 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:21.669 23:10:45 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:21.931 23:10:45 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:21.931 23:10:45 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:21.931 23:10:45 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:21.931 23:10:45 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:21.931 23:10:45 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:21.931 23:10:45 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
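The loop that just completed resolves each PCI address to its character device by following the /sys/class/nvme symlinks, then reads two fields out of nvme-cli's id-ctrl output: oacs, whose 0x8 bit is the namespace-management capability the "[[ 8 -ne 0 ]]" test checks, and unvmcap, the unallocated NVM capacity. A compact sketch of the same resolution and parsing; nvme_ctrlr_for_bdf is a hypothetical helper name, and the "field : value" output format is assumed to match what the trace shows (oacs : 0x12a).

    # Hypothetical mirror of the readlink/grep resolution in the trace above.
    nvme_ctrlr_for_bdf() {        # nvme_ctrlr_for_bdf 0000:00:10.0 -> /dev/nvme1
        local bdf=$1 path
        path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return 1
        printf '/dev/%s\n' "$(basename "$path")"
    }

    ctrlr=$(nvme_ctrlr_for_bdf 0000:00:10.0)
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)      # e.g. ' 0x12a'
    if (( oacs & 0x8 )); then                                    # bit 3: namespace management
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        echo "ns-manage supported, unvmcap:${unvmcap}"
    fi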
00:05:21.931 23:10:45 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:21.931 23:10:45 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:21.931 23:10:45 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:21.931 23:10:45 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:21.931 23:10:45 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:21.931 23:10:45 -- common/autotest_common.sh@1572 -- # return 0 00:05:21.931 23:10:45 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:21.931 23:10:45 -- common/autotest_common.sh@1580 -- # return 0 00:05:21.931 23:10:45 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:21.931 23:10:45 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:21.931 23:10:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:21.931 23:10:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:21.931 23:10:45 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:21.931 23:10:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:21.931 23:10:45 -- common/autotest_common.sh@10 -- # set +x 00:05:21.931 23:10:45 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:21.931 23:10:45 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:21.931 23:10:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.931 23:10:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.931 23:10:45 -- common/autotest_common.sh@10 -- # set +x 00:05:21.931 ************************************ 00:05:21.931 START TEST env 00:05:21.931 ************************************ 00:05:21.931 23:10:45 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:21.931 * Looking for test storage... 00:05:21.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:21.931 23:10:45 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.931 23:10:45 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.931 23:10:45 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.931 23:10:45 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.931 23:10:45 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.931 23:10:45 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.931 23:10:45 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.931 23:10:45 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.931 23:10:45 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.931 23:10:45 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.931 23:10:45 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.931 23:10:45 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.931 23:10:45 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.931 23:10:45 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.931 23:10:45 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.931 23:10:45 env -- scripts/common.sh@344 -- # case "$op" in 00:05:21.931 23:10:45 env -- scripts/common.sh@345 -- # : 1 00:05:21.931 23:10:45 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.931 23:10:45 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.931 23:10:45 env -- scripts/common.sh@365 -- # decimal 1 00:05:21.931 23:10:45 env -- scripts/common.sh@353 -- # local d=1 00:05:21.931 23:10:45 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.931 23:10:45 env -- scripts/common.sh@355 -- # echo 1 00:05:21.931 23:10:45 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.931 23:10:45 env -- scripts/common.sh@366 -- # decimal 2 00:05:21.931 23:10:45 env -- scripts/common.sh@353 -- # local d=2 00:05:21.931 23:10:45 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.931 23:10:45 env -- scripts/common.sh@355 -- # echo 2 00:05:21.931 23:10:45 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.931 23:10:45 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.931 23:10:45 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.931 23:10:45 env -- scripts/common.sh@368 -- # return 0 00:05:21.931 23:10:45 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.931 23:10:45 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.931 --rc genhtml_branch_coverage=1 00:05:21.931 --rc genhtml_function_coverage=1 00:05:21.931 --rc genhtml_legend=1 00:05:21.931 --rc geninfo_all_blocks=1 00:05:21.931 --rc geninfo_unexecuted_blocks=1 00:05:21.931 00:05:21.931 ' 00:05:21.931 23:10:45 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.931 --rc genhtml_branch_coverage=1 00:05:21.931 --rc genhtml_function_coverage=1 00:05:21.931 --rc genhtml_legend=1 00:05:21.931 --rc geninfo_all_blocks=1 00:05:21.931 --rc geninfo_unexecuted_blocks=1 00:05:21.931 00:05:21.931 ' 00:05:21.931 23:10:45 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.931 --rc genhtml_branch_coverage=1 00:05:21.931 --rc genhtml_function_coverage=1 00:05:21.931 --rc genhtml_legend=1 00:05:21.931 --rc geninfo_all_blocks=1 00:05:21.931 --rc geninfo_unexecuted_blocks=1 00:05:21.931 00:05:21.931 ' 00:05:21.931 23:10:45 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.931 --rc genhtml_branch_coverage=1 00:05:21.931 --rc genhtml_function_coverage=1 00:05:21.931 --rc genhtml_legend=1 00:05:21.931 --rc geninfo_all_blocks=1 00:05:21.931 --rc geninfo_unexecuted_blocks=1 00:05:21.931 00:05:21.931 ' 00:05:21.931 23:10:45 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:21.931 23:10:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.931 23:10:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.931 23:10:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.931 ************************************ 00:05:21.931 START TEST env_memory 00:05:21.931 ************************************ 00:05:21.931 23:10:45 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:21.931 00:05:21.931 00:05:21.931 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.932 http://cunit.sourceforge.net/ 00:05:21.932 00:05:21.932 00:05:21.932 Suite: memory 00:05:22.193 Test: alloc and free memory map ...[2024-11-17 23:10:45.755630] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:22.193 passed 00:05:22.193 Test: mem map translation ...[2024-11-17 23:10:45.794757] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:22.193 [2024-11-17 23:10:45.794913] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:22.193 [2024-11-17 23:10:45.795046] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:22.193 [2024-11-17 23:10:45.795088] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:22.193 passed 00:05:22.193 Test: mem map registration ...[2024-11-17 23:10:45.863465] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:22.193 [2024-11-17 23:10:45.863603] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:22.193 passed 00:05:22.193 Test: mem map adjacent registrations ...passed 00:05:22.193 00:05:22.193 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.193 suites 1 1 n/a 0 0 00:05:22.193 tests 4 4 4 0 0 00:05:22.193 asserts 152 152 152 0 n/a 00:05:22.193 00:05:22.193 Elapsed time = 0.233 seconds 00:05:22.193 00:05:22.193 real 0m0.268s 00:05:22.193 user 0m0.240s 00:05:22.193 sys 0m0.020s 00:05:22.193 23:10:45 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.193 23:10:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:22.193 ************************************ 00:05:22.193 END TEST env_memory 00:05:22.193 ************************************ 00:05:22.455 23:10:46 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:22.455 23:10:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.455 23:10:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.455 23:10:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.455 ************************************ 00:05:22.455 START TEST env_vtophys 00:05:22.455 ************************************ 00:05:22.455 23:10:46 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:22.455 EAL: lib.eal log level changed from notice to debug 00:05:22.455 EAL: Detected lcore 0 as core 0 on socket 0 00:05:22.455 EAL: Detected lcore 1 as core 0 on socket 0 00:05:22.455 EAL: Detected lcore 2 as core 0 on socket 0 00:05:22.455 EAL: Detected lcore 3 as core 0 on socket 0 00:05:22.455 EAL: Detected lcore 4 as core 0 on socket 0 00:05:22.455 EAL: Detected lcore 5 as core 0 on socket 0 00:05:22.455 EAL: Detected lcore 6 as core 0 on socket 0 00:05:22.455 EAL: Detected lcore 7 as core 0 on socket 0 00:05:22.455 EAL: Detected lcore 8 as core 0 on socket 0 00:05:22.455 EAL: Detected lcore 9 as core 0 on socket 0 00:05:22.455 EAL: Maximum logical cores by configuration: 128 00:05:22.455 EAL: Detected CPU lcores: 10 00:05:22.455 EAL: Detected NUMA nodes: 1 00:05:22.455 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:22.455 EAL: Detected shared linkage of DPDK 00:05:22.455 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:22.455 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:22.455 EAL: Registered [vdev] bus. 00:05:22.455 EAL: bus.vdev log level changed from disabled to notice 00:05:22.455 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:22.455 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:22.455 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:22.455 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:22.455 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:22.455 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:22.455 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:22.455 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:22.455 EAL: No shared files mode enabled, IPC will be disabled 00:05:22.455 EAL: No shared files mode enabled, IPC is disabled 00:05:22.455 EAL: Selected IOVA mode 'PA' 00:05:22.455 EAL: Probing VFIO support... 00:05:22.455 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:22.455 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:22.455 EAL: Ask a virtual area of 0x2e000 bytes 00:05:22.455 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:22.455 EAL: Setting up physically contiguous memory... 00:05:22.455 EAL: Setting maximum number of open files to 524288 00:05:22.455 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:22.455 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:22.455 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.455 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:22.455 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.455 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.455 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:22.455 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:22.455 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.455 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:22.455 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.455 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.455 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:22.455 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:22.455 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.455 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:22.455 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.455 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.455 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:22.455 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:22.455 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.455 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:22.455 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.455 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.455 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:22.455 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:05:22.455 EAL: Hugepages will be freed exactly as allocated. 00:05:22.455 EAL: No shared files mode enabled, IPC is disabled 00:05:22.455 EAL: No shared files mode enabled, IPC is disabled 00:05:22.455 EAL: TSC frequency is ~2600000 KHz 00:05:22.455 EAL: Main lcore 0 is ready (tid=7fb6c0e56a40;cpuset=[0]) 00:05:22.455 EAL: Trying to obtain current memory policy. 00:05:22.455 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.455 EAL: Restoring previous memory policy: 0 00:05:22.455 EAL: request: mp_malloc_sync 00:05:22.455 EAL: No shared files mode enabled, IPC is disabled 00:05:22.455 EAL: Heap on socket 0 was expanded by 2MB 00:05:22.455 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:22.455 EAL: No shared files mode enabled, IPC is disabled 00:05:22.455 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:22.455 EAL: Mem event callback 'spdk:(nil)' registered 00:05:22.455 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:22.455 00:05:22.455 00:05:22.455 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.455 http://cunit.sourceforge.net/ 00:05:22.455 00:05:22.455 00:05:22.455 Suite: components_suite 00:05:23.029 Test: vtophys_malloc_test ...passed 00:05:23.029 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:23.029 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.029 EAL: Restoring previous memory policy: 4 00:05:23.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.029 EAL: request: mp_malloc_sync 00:05:23.029 EAL: No shared files mode enabled, IPC is disabled 00:05:23.029 EAL: Heap on socket 0 was expanded by 4MB 00:05:23.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.029 EAL: request: mp_malloc_sync 00:05:23.029 EAL: No shared files mode enabled, IPC is disabled 00:05:23.029 EAL: Heap on socket 0 was shrunk by 4MB 00:05:23.029 EAL: Trying to obtain current memory policy. 00:05:23.029 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.029 EAL: Restoring previous memory policy: 4 00:05:23.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.029 EAL: request: mp_malloc_sync 00:05:23.029 EAL: No shared files mode enabled, IPC is disabled 00:05:23.029 EAL: Heap on socket 0 was expanded by 6MB 00:05:23.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.029 EAL: request: mp_malloc_sync 00:05:23.029 EAL: No shared files mode enabled, IPC is disabled 00:05:23.029 EAL: Heap on socket 0 was shrunk by 6MB 00:05:23.029 EAL: Trying to obtain current memory policy. 00:05:23.029 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.029 EAL: Restoring previous memory policy: 4 00:05:23.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.029 EAL: request: mp_malloc_sync 00:05:23.029 EAL: No shared files mode enabled, IPC is disabled 00:05:23.029 EAL: Heap on socket 0 was expanded by 10MB 00:05:23.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.029 EAL: request: mp_malloc_sync 00:05:23.029 EAL: No shared files mode enabled, IPC is disabled 00:05:23.029 EAL: Heap on socket 0 was shrunk by 10MB 00:05:23.029 EAL: Trying to obtain current memory policy. 
00:05:23.029 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.029 EAL: Restoring previous memory policy: 4 00:05:23.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.029 EAL: request: mp_malloc_sync 00:05:23.029 EAL: No shared files mode enabled, IPC is disabled 00:05:23.029 EAL: Heap on socket 0 was expanded by 18MB 00:05:23.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.029 EAL: request: mp_malloc_sync 00:05:23.029 EAL: No shared files mode enabled, IPC is disabled 00:05:23.029 EAL: Heap on socket 0 was shrunk by 18MB 00:05:23.029 EAL: Trying to obtain current memory policy. 00:05:23.029 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.029 EAL: Restoring previous memory policy: 4 00:05:23.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.029 EAL: request: mp_malloc_sync 00:05:23.029 EAL: No shared files mode enabled, IPC is disabled 00:05:23.029 EAL: Heap on socket 0 was expanded by 34MB 00:05:23.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.029 EAL: request: mp_malloc_sync 00:05:23.029 EAL: No shared files mode enabled, IPC is disabled 00:05:23.029 EAL: Heap on socket 0 was shrunk by 34MB 00:05:23.029 EAL: Trying to obtain current memory policy. 00:05:23.029 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.029 EAL: Restoring previous memory policy: 4 00:05:23.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.029 EAL: request: mp_malloc_sync 00:05:23.029 EAL: No shared files mode enabled, IPC is disabled 00:05:23.029 EAL: Heap on socket 0 was expanded by 66MB 00:05:23.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.029 EAL: request: mp_malloc_sync 00:05:23.029 EAL: No shared files mode enabled, IPC is disabled 00:05:23.029 EAL: Heap on socket 0 was shrunk by 66MB 00:05:23.029 EAL: Trying to obtain current memory policy. 00:05:23.029 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.029 EAL: Restoring previous memory policy: 4 00:05:23.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.029 EAL: request: mp_malloc_sync 00:05:23.029 EAL: No shared files mode enabled, IPC is disabled 00:05:23.029 EAL: Heap on socket 0 was expanded by 130MB 00:05:23.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.029 EAL: request: mp_malloc_sync 00:05:23.029 EAL: No shared files mode enabled, IPC is disabled 00:05:23.029 EAL: Heap on socket 0 was shrunk by 130MB 00:05:23.029 EAL: Trying to obtain current memory policy. 00:05:23.029 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.029 EAL: Restoring previous memory policy: 4 00:05:23.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.029 EAL: request: mp_malloc_sync 00:05:23.029 EAL: No shared files mode enabled, IPC is disabled 00:05:23.029 EAL: Heap on socket 0 was expanded by 258MB 00:05:23.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.029 EAL: request: mp_malloc_sync 00:05:23.029 EAL: No shared files mode enabled, IPC is disabled 00:05:23.029 EAL: Heap on socket 0 was shrunk by 258MB 00:05:23.029 EAL: Trying to obtain current memory policy. 
00:05:23.029 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.290 EAL: Restoring previous memory policy: 4 00:05:23.290 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.290 EAL: request: mp_malloc_sync 00:05:23.290 EAL: No shared files mode enabled, IPC is disabled 00:05:23.290 EAL: Heap on socket 0 was expanded by 514MB 00:05:23.290 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.290 EAL: request: mp_malloc_sync 00:05:23.290 EAL: No shared files mode enabled, IPC is disabled 00:05:23.290 EAL: Heap on socket 0 was shrunk by 514MB 00:05:23.290 EAL: Trying to obtain current memory policy. 00:05:23.290 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.551 EAL: Restoring previous memory policy: 4 00:05:23.551 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.551 EAL: request: mp_malloc_sync 00:05:23.551 EAL: No shared files mode enabled, IPC is disabled 00:05:23.551 EAL: Heap on socket 0 was expanded by 1026MB 00:05:23.551 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.812 passed 00:05:23.812 00:05:23.812 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.812 suites 1 1 n/a 0 0 00:05:23.812 tests 2 2 2 0 0 00:05:23.812 asserts 5386 5386 5386 0 n/a 00:05:23.812 00:05:23.812 Elapsed time = 1.204 seconds 00:05:23.812 EAL: request: mp_malloc_sync 00:05:23.812 EAL: No shared files mode enabled, IPC is disabled 00:05:23.812 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:23.812 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.812 EAL: request: mp_malloc_sync 00:05:23.812 EAL: No shared files mode enabled, IPC is disabled 00:05:23.812 EAL: Heap on socket 0 was shrunk by 2MB 00:05:23.812 EAL: No shared files mode enabled, IPC is disabled 00:05:23.812 EAL: No shared files mode enabled, IPC is disabled 00:05:23.812 EAL: No shared files mode enabled, IPC is disabled 00:05:23.812 00:05:23.812 real 0m1.439s 00:05:23.812 user 0m0.575s 00:05:23.812 sys 0m0.719s 00:05:23.812 ************************************ 00:05:23.812 END TEST env_vtophys 00:05:23.812 ************************************ 00:05:23.812 23:10:47 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.812 23:10:47 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:23.812 23:10:47 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:23.812 23:10:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.812 23:10:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.812 23:10:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.812 ************************************ 00:05:23.812 START TEST env_pci 00:05:23.812 ************************************ 00:05:23.812 23:10:47 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:23.812 00:05:23.812 00:05:23.812 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.812 http://cunit.sourceforge.net/ 00:05:23.812 00:05:23.812 00:05:23.812 Suite: pci 00:05:23.812 Test: pci_hook ...[2024-11-17 23:10:47.557823] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68986 has claimed it 00:05:23.812 passed 00:05:23.812 00:05:23.812 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.812 suites 1 1 n/a 0 0 00:05:23.812 tests 1 1 1 0 0 00:05:23.812 asserts 25 25 25 0 n/a 00:05:23.812 00:05:23.812 Elapsed time = 0.003 seconds 00:05:23.812 EAL: Cannot find 
device (10000:00:01.0) 00:05:23.812 EAL: Failed to attach device on primary process 00:05:23.812 00:05:23.812 real 0m0.057s 00:05:23.812 user 0m0.026s 00:05:23.812 sys 0m0.030s 00:05:23.812 23:10:47 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.812 ************************************ 00:05:23.812 END TEST env_pci 00:05:23.812 23:10:47 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:23.812 ************************************ 00:05:24.073 23:10:47 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:24.073 23:10:47 env -- env/env.sh@15 -- # uname 00:05:24.073 23:10:47 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:24.073 23:10:47 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:24.073 23:10:47 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:24.073 23:10:47 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:24.073 23:10:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.073 23:10:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.073 ************************************ 00:05:24.073 START TEST env_dpdk_post_init 00:05:24.073 ************************************ 00:05:24.073 23:10:47 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:24.073 EAL: Detected CPU lcores: 10 00:05:24.073 EAL: Detected NUMA nodes: 1 00:05:24.073 EAL: Detected shared linkage of DPDK 00:05:24.073 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:24.073 EAL: Selected IOVA mode 'PA' 00:05:24.073 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:24.073 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:24.073 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:24.073 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:24.073 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:24.073 Starting DPDK initialization... 00:05:24.073 Starting SPDK post initialization... 00:05:24.073 SPDK NVMe probe 00:05:24.073 Attaching to 0000:00:10.0 00:05:24.073 Attaching to 0000:00:11.0 00:05:24.073 Attaching to 0000:00:12.0 00:05:24.073 Attaching to 0000:00:13.0 00:05:24.073 Attached to 0000:00:10.0 00:05:24.073 Attached to 0000:00:11.0 00:05:24.073 Attached to 0000:00:13.0 00:05:24.073 Attached to 0000:00:12.0 00:05:24.074 Cleaning up... 
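The env_dpdk_post_init run above initializes EAL and then probes the four emulated QEMU NVMe controllers (1b36:0010) at 0000:00:10.0 through 0000:00:13.0. For orientation only, a minimal C sketch of that probe/attach flow against the public SPDK env/nvme API might look like the following; probe_cb, attach_cb, and the process name are invented for the illustration, and spdk_env_opts_init, spdk_env_init, and spdk_nvme_probe are the assumed real entry points.

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /* Invented callback: decide per discovered controller whether to attach. */
    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
            printf("Probing %s\n", trid->traddr);  /* e.g. 0000:00:10.0 */
            return true;                           /* true => attach to it */
    }

    /* Invented callback: runs once per controller that attached. */
    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
            printf("Attached to %s\n", trid->traddr);
    }

    int
    main(void)
    {
            struct spdk_env_opts opts;

            spdk_env_opts_init(&opts);        /* default core mask, IOVA mode, ... */
            opts.name = "probe_sketch";       /* invented process name */
            if (spdk_env_init(&opts) < 0) {
                    return 1;                 /* EAL failed to come up */
            }
            /* NULL trid => scan the local PCIe bus, as the log above shows. */
            if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
                    return 1;
            }
            return 0;
    }

A real program would detach the attached controllers before exiting; the test binary driven above reaches the equivalent stage where it prints "Cleaning up...".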
00:05:24.335 00:05:24.335 real 0m0.233s 00:05:24.335 user 0m0.063s 00:05:24.335 sys 0m0.073s 00:05:24.335 ************************************ 00:05:24.335 23:10:47 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.335 23:10:47 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:24.335 END TEST env_dpdk_post_init 00:05:24.335 ************************************ 00:05:24.335 23:10:47 env -- env/env.sh@26 -- # uname 00:05:24.335 23:10:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:24.335 23:10:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:24.335 23:10:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.335 23:10:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.335 23:10:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.335 ************************************ 00:05:24.335 START TEST env_mem_callbacks 00:05:24.335 ************************************ 00:05:24.335 23:10:47 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:24.335 EAL: Detected CPU lcores: 10 00:05:24.335 EAL: Detected NUMA nodes: 1 00:05:24.335 EAL: Detected shared linkage of DPDK 00:05:24.335 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:24.335 EAL: Selected IOVA mode 'PA' 00:05:24.335 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:24.335 00:05:24.335 00:05:24.335 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.335 http://cunit.sourceforge.net/ 00:05:24.335 00:05:24.335 00:05:24.335 Suite: memory 00:05:24.335 Test: test ... 00:05:24.335 register 0x200000200000 2097152 00:05:24.335 malloc 3145728 00:05:24.335 register 0x200000400000 4194304 00:05:24.335 buf 0x200000500000 len 3145728 PASSED 00:05:24.335 malloc 64 00:05:24.335 buf 0x2000004fff40 len 64 PASSED 00:05:24.335 malloc 4194304 00:05:24.335 register 0x200000800000 6291456 00:05:24.335 buf 0x200000a00000 len 4194304 PASSED 00:05:24.335 free 0x200000500000 3145728 00:05:24.335 free 0x2000004fff40 64 00:05:24.335 unregister 0x200000400000 4194304 PASSED 00:05:24.335 free 0x200000a00000 4194304 00:05:24.335 unregister 0x200000800000 6291456 PASSED 00:05:24.335 malloc 8388608 00:05:24.335 register 0x200000400000 10485760 00:05:24.335 buf 0x200000600000 len 8388608 PASSED 00:05:24.335 free 0x200000600000 8388608 00:05:24.335 unregister 0x200000400000 10485760 PASSED 00:05:24.335 passed 00:05:24.336 00:05:24.336 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.336 suites 1 1 n/a 0 0 00:05:24.336 tests 1 1 1 0 0 00:05:24.336 asserts 15 15 15 0 n/a 00:05:24.336 00:05:24.336 Elapsed time = 0.010 seconds 00:05:24.336 00:05:24.336 real 0m0.156s 00:05:24.336 user 0m0.019s 00:05:24.336 sys 0m0.036s 00:05:24.336 23:10:48 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.336 ************************************ 00:05:24.336 END TEST env_mem_callbacks 00:05:24.336 ************************************ 00:05:24.336 23:10:48 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:24.596 00:05:24.596 real 0m2.640s 00:05:24.596 user 0m1.065s 00:05:24.596 sys 0m1.113s 00:05:24.596 23:10:48 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.596 ************************************ 00:05:24.596 END TEST env 00:05:24.596 ************************************ 00:05:24.596 23:10:48 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:24.596 23:10:48 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:24.596 23:10:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.596 23:10:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.596 23:10:48 -- common/autotest_common.sh@10 -- # set +x 00:05:24.596 ************************************ 00:05:24.596 START TEST rpc 00:05:24.596 ************************************ 00:05:24.596 23:10:48 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:24.596 * Looking for test storage... 00:05:24.596 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:24.596 23:10:48 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:24.596 23:10:48 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:24.596 23:10:48 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.596 23:10:48 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.596 23:10:48 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.596 23:10:48 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.596 23:10:48 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.596 23:10:48 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.596 23:10:48 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.596 23:10:48 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.596 23:10:48 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.596 23:10:48 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.596 23:10:48 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.596 23:10:48 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.596 23:10:48 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.596 23:10:48 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:24.596 23:10:48 rpc -- scripts/common.sh@345 -- # : 1 00:05:24.596 23:10:48 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.596 23:10:48 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.596 23:10:48 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:24.596 23:10:48 rpc -- scripts/common.sh@353 -- # local d=1 00:05:24.596 23:10:48 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.596 23:10:48 rpc -- scripts/common.sh@355 -- # echo 1 00:05:24.596 23:10:48 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.597 23:10:48 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:24.597 23:10:48 rpc -- scripts/common.sh@353 -- # local d=2 00:05:24.597 23:10:48 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.597 23:10:48 rpc -- scripts/common.sh@355 -- # echo 2 00:05:24.597 23:10:48 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.597 23:10:48 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.597 23:10:48 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.597 23:10:48 rpc -- scripts/common.sh@368 -- # return 0 00:05:24.597 23:10:48 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.597 23:10:48 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:24.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.597 --rc genhtml_branch_coverage=1 00:05:24.597 --rc genhtml_function_coverage=1 00:05:24.597 --rc genhtml_legend=1 00:05:24.597 --rc geninfo_all_blocks=1 00:05:24.597 --rc geninfo_unexecuted_blocks=1 00:05:24.597 00:05:24.597 ' 00:05:24.597 23:10:48 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:24.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.597 --rc genhtml_branch_coverage=1 00:05:24.597 --rc genhtml_function_coverage=1 00:05:24.597 --rc genhtml_legend=1 00:05:24.597 --rc geninfo_all_blocks=1 00:05:24.597 --rc geninfo_unexecuted_blocks=1 00:05:24.597 00:05:24.597 ' 00:05:24.597 23:10:48 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:24.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.597 --rc genhtml_branch_coverage=1 00:05:24.597 --rc genhtml_function_coverage=1 00:05:24.597 --rc genhtml_legend=1 00:05:24.597 --rc geninfo_all_blocks=1 00:05:24.597 --rc geninfo_unexecuted_blocks=1 00:05:24.597 00:05:24.597 ' 00:05:24.597 23:10:48 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:24.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.597 --rc genhtml_branch_coverage=1 00:05:24.597 --rc genhtml_function_coverage=1 00:05:24.597 --rc genhtml_legend=1 00:05:24.597 --rc geninfo_all_blocks=1 00:05:24.597 --rc geninfo_unexecuted_blocks=1 00:05:24.597 00:05:24.597 ' 00:05:24.597 23:10:48 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69113 00:05:24.597 23:10:48 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.597 23:10:48 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69113 00:05:24.597 23:10:48 rpc -- common/autotest_common.sh@835 -- # '[' -z 69113 ']' 00:05:24.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.597 23:10:48 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.597 23:10:48 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.597 23:10:48 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:24.597 23:10:48 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:24.597 23:10:48 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.597 23:10:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.856 [2024-11-17 23:10:48.469697] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:24.856 [2024-11-17 23:10:48.469855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69113 ] 00:05:24.856 [2024-11-17 23:10:48.617511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.856 [2024-11-17 23:10:48.636026] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:24.856 [2024-11-17 23:10:48.636074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69113' to capture a snapshot of events at runtime. 00:05:24.856 [2024-11-17 23:10:48.636086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:24.856 [2024-11-17 23:10:48.636097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:24.856 [2024-11-17 23:10:48.636107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69113 for offline analysis/debug. 00:05:24.856 [2024-11-17 23:10:48.636400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.791 23:10:49 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.791 23:10:49 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:25.791 23:10:49 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:25.791 23:10:49 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:25.791 23:10:49 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:25.791 23:10:49 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:25.791 23:10:49 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.791 23:10:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.791 23:10:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.791 ************************************ 00:05:25.791 START TEST rpc_integrity 00:05:25.791 ************************************ 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:25.791 23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.791 23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:25.791 23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:25.791 23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:25.791 23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.791 23:10:49 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.791 23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:25.791 23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.791 23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:25.791 { 00:05:25.791 "name": "Malloc0", 00:05:25.791 "aliases": [ 00:05:25.791 "d2f0c5bf-3a68-45ec-9494-e5cec5fc11a6" 00:05:25.791 ], 00:05:25.791 "product_name": "Malloc disk", 00:05:25.791 "block_size": 512, 00:05:25.791 "num_blocks": 16384, 00:05:25.791 "uuid": "d2f0c5bf-3a68-45ec-9494-e5cec5fc11a6", 00:05:25.791 "assigned_rate_limits": { 00:05:25.791 "rw_ios_per_sec": 0, 00:05:25.791 "rw_mbytes_per_sec": 0, 00:05:25.791 "r_mbytes_per_sec": 0, 00:05:25.791 "w_mbytes_per_sec": 0 00:05:25.791 }, 00:05:25.791 "claimed": false, 00:05:25.791 "zoned": false, 00:05:25.791 "supported_io_types": { 00:05:25.791 "read": true, 00:05:25.791 "write": true, 00:05:25.791 "unmap": true, 00:05:25.791 "flush": true, 00:05:25.791 "reset": true, 00:05:25.791 "nvme_admin": false, 00:05:25.791 "nvme_io": false, 00:05:25.791 "nvme_io_md": false, 00:05:25.791 "write_zeroes": true, 00:05:25.791 "zcopy": true, 00:05:25.791 "get_zone_info": false, 00:05:25.791 "zone_management": false, 00:05:25.791 "zone_append": false, 00:05:25.791 "compare": false, 00:05:25.791 "compare_and_write": false, 00:05:25.791 "abort": true, 00:05:25.791 "seek_hole": false, 00:05:25.791 "seek_data": false, 00:05:25.791 "copy": true, 00:05:25.791 "nvme_iov_md": false 00:05:25.791 }, 00:05:25.791 "memory_domains": [ 00:05:25.791 { 00:05:25.791 "dma_device_id": "system", 00:05:25.791 "dma_device_type": 1 00:05:25.791 }, 00:05:25.791 { 00:05:25.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.791 "dma_device_type": 2 00:05:25.791 } 00:05:25.791 ], 00:05:25.791 "driver_specific": {} 00:05:25.791 } 00:05:25.791 ]' 00:05:25.791 23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:25.791 23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:25.791 23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.791 [2024-11-17 23:10:49.418349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:25.791 [2024-11-17 23:10:49.418411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:25.791 [2024-11-17 23:10:49.418439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:25.791 [2024-11-17 23:10:49.418451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:25.791 [2024-11-17 23:10:49.420770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:25.791 [2024-11-17 23:10:49.420809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:25.791 Passthru0 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.791 
23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.791 23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:25.791 { 00:05:25.791 "name": "Malloc0", 00:05:25.791 "aliases": [ 00:05:25.791 "d2f0c5bf-3a68-45ec-9494-e5cec5fc11a6" 00:05:25.791 ], 00:05:25.791 "product_name": "Malloc disk", 00:05:25.791 "block_size": 512, 00:05:25.791 "num_blocks": 16384, 00:05:25.791 "uuid": "d2f0c5bf-3a68-45ec-9494-e5cec5fc11a6", 00:05:25.791 "assigned_rate_limits": { 00:05:25.791 "rw_ios_per_sec": 0, 00:05:25.791 "rw_mbytes_per_sec": 0, 00:05:25.791 "r_mbytes_per_sec": 0, 00:05:25.791 "w_mbytes_per_sec": 0 00:05:25.791 }, 00:05:25.791 "claimed": true, 00:05:25.791 "claim_type": "exclusive_write", 00:05:25.791 "zoned": false, 00:05:25.791 "supported_io_types": { 00:05:25.791 "read": true, 00:05:25.791 "write": true, 00:05:25.791 "unmap": true, 00:05:25.791 "flush": true, 00:05:25.791 "reset": true, 00:05:25.791 "nvme_admin": false, 00:05:25.791 "nvme_io": false, 00:05:25.791 "nvme_io_md": false, 00:05:25.791 "write_zeroes": true, 00:05:25.791 "zcopy": true, 00:05:25.791 "get_zone_info": false, 00:05:25.791 "zone_management": false, 00:05:25.791 "zone_append": false, 00:05:25.791 "compare": false, 00:05:25.791 "compare_and_write": false, 00:05:25.791 "abort": true, 00:05:25.791 "seek_hole": false, 00:05:25.791 "seek_data": false, 00:05:25.791 "copy": true, 00:05:25.791 "nvme_iov_md": false 00:05:25.791 }, 00:05:25.791 "memory_domains": [ 00:05:25.791 { 00:05:25.791 "dma_device_id": "system", 00:05:25.791 "dma_device_type": 1 00:05:25.791 }, 00:05:25.791 { 00:05:25.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.791 "dma_device_type": 2 00:05:25.791 } 00:05:25.791 ], 00:05:25.791 "driver_specific": {} 00:05:25.791 }, 00:05:25.791 { 00:05:25.791 "name": "Passthru0", 00:05:25.791 "aliases": [ 00:05:25.791 "ed330e35-eddc-5669-b577-ac80c864d7b9" 00:05:25.791 ], 00:05:25.791 "product_name": "passthru", 00:05:25.791 "block_size": 512, 00:05:25.791 "num_blocks": 16384, 00:05:25.791 "uuid": "ed330e35-eddc-5669-b577-ac80c864d7b9", 00:05:25.791 "assigned_rate_limits": { 00:05:25.791 "rw_ios_per_sec": 0, 00:05:25.791 "rw_mbytes_per_sec": 0, 00:05:25.791 "r_mbytes_per_sec": 0, 00:05:25.791 "w_mbytes_per_sec": 0 00:05:25.791 }, 00:05:25.791 "claimed": false, 00:05:25.791 "zoned": false, 00:05:25.791 "supported_io_types": { 00:05:25.791 "read": true, 00:05:25.791 "write": true, 00:05:25.791 "unmap": true, 00:05:25.791 "flush": true, 00:05:25.791 "reset": true, 00:05:25.791 "nvme_admin": false, 00:05:25.791 "nvme_io": false, 00:05:25.791 "nvme_io_md": false, 00:05:25.791 "write_zeroes": true, 00:05:25.791 "zcopy": true, 00:05:25.791 "get_zone_info": false, 00:05:25.791 "zone_management": false, 00:05:25.791 "zone_append": false, 00:05:25.791 "compare": false, 00:05:25.791 "compare_and_write": false, 00:05:25.791 "abort": true, 00:05:25.791 "seek_hole": false, 00:05:25.791 "seek_data": false, 00:05:25.791 "copy": true, 00:05:25.791 "nvme_iov_md": false 00:05:25.791 }, 00:05:25.791 "memory_domains": [ 00:05:25.791 { 00:05:25.791 "dma_device_id": "system", 00:05:25.791 "dma_device_type": 1 00:05:25.791 }, 00:05:25.791 { 00:05:25.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.791 "dma_device_type": 2 
00:05:25.791 } 00:05:25.791 ], 00:05:25.791 "driver_specific": { 00:05:25.791 "passthru": { 00:05:25.791 "name": "Passthru0", 00:05:25.791 "base_bdev_name": "Malloc0" 00:05:25.791 } 00:05:25.791 } 00:05:25.791 } 00:05:25.791 ]' 00:05:25.791 23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:25.791 23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:25.791 23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.791 23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.791 23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.791 23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:25.791 23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:25.791 23:10:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:25.791 00:05:25.791 real 0m0.236s 00:05:25.791 user 0m0.136s 00:05:25.791 sys 0m0.023s 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.791 ************************************ 00:05:25.791 END TEST rpc_integrity 00:05:25.791 ************************************ 00:05:25.791 23:10:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.791 23:10:49 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:25.791 23:10:49 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.791 23:10:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.791 23:10:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.791 ************************************ 00:05:25.791 START TEST rpc_plugins 00:05:25.791 ************************************ 00:05:25.791 23:10:49 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:25.791 23:10:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:25.791 23:10:49 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.791 23:10:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.050 23:10:49 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.050 23:10:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:26.050 23:10:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:26.050 23:10:49 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.050 23:10:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.050 23:10:49 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.050 23:10:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:26.050 { 00:05:26.050 "name": "Malloc1", 00:05:26.050 "aliases": 
[ 00:05:26.050 "503fc3c7-c491-4a8c-a76a-f8fa1e28b9a8" 00:05:26.050 ], 00:05:26.050 "product_name": "Malloc disk", 00:05:26.050 "block_size": 4096, 00:05:26.050 "num_blocks": 256, 00:05:26.050 "uuid": "503fc3c7-c491-4a8c-a76a-f8fa1e28b9a8", 00:05:26.050 "assigned_rate_limits": { 00:05:26.050 "rw_ios_per_sec": 0, 00:05:26.050 "rw_mbytes_per_sec": 0, 00:05:26.050 "r_mbytes_per_sec": 0, 00:05:26.050 "w_mbytes_per_sec": 0 00:05:26.050 }, 00:05:26.050 "claimed": false, 00:05:26.050 "zoned": false, 00:05:26.050 "supported_io_types": { 00:05:26.050 "read": true, 00:05:26.050 "write": true, 00:05:26.050 "unmap": true, 00:05:26.050 "flush": true, 00:05:26.050 "reset": true, 00:05:26.050 "nvme_admin": false, 00:05:26.050 "nvme_io": false, 00:05:26.050 "nvme_io_md": false, 00:05:26.050 "write_zeroes": true, 00:05:26.050 "zcopy": true, 00:05:26.050 "get_zone_info": false, 00:05:26.050 "zone_management": false, 00:05:26.050 "zone_append": false, 00:05:26.050 "compare": false, 00:05:26.050 "compare_and_write": false, 00:05:26.050 "abort": true, 00:05:26.050 "seek_hole": false, 00:05:26.050 "seek_data": false, 00:05:26.050 "copy": true, 00:05:26.050 "nvme_iov_md": false 00:05:26.050 }, 00:05:26.050 "memory_domains": [ 00:05:26.050 { 00:05:26.050 "dma_device_id": "system", 00:05:26.050 "dma_device_type": 1 00:05:26.050 }, 00:05:26.050 { 00:05:26.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.050 "dma_device_type": 2 00:05:26.050 } 00:05:26.050 ], 00:05:26.050 "driver_specific": {} 00:05:26.050 } 00:05:26.050 ]' 00:05:26.050 23:10:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:26.050 23:10:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:26.050 23:10:49 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:26.050 23:10:49 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.050 23:10:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.050 23:10:49 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.050 23:10:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:26.050 23:10:49 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.050 23:10:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.050 23:10:49 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.050 23:10:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:26.050 23:10:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:26.050 23:10:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:26.050 00:05:26.050 real 0m0.107s 00:05:26.050 user 0m0.059s 00:05:26.050 sys 0m0.016s 00:05:26.050 23:10:49 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.050 23:10:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.050 ************************************ 00:05:26.050 END TEST rpc_plugins 00:05:26.050 ************************************ 00:05:26.050 23:10:49 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:26.050 23:10:49 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.050 23:10:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.050 23:10:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.050 ************************************ 00:05:26.050 START TEST rpc_trace_cmd_test 00:05:26.050 ************************************ 00:05:26.050 23:10:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:05:26.050 23:10:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:26.050 23:10:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:26.050 23:10:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.050 23:10:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:26.050 23:10:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.050 23:10:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:26.050 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69113", 00:05:26.050 "tpoint_group_mask": "0x8", 00:05:26.050 "iscsi_conn": { 00:05:26.050 "mask": "0x2", 00:05:26.050 "tpoint_mask": "0x0" 00:05:26.050 }, 00:05:26.050 "scsi": { 00:05:26.050 "mask": "0x4", 00:05:26.050 "tpoint_mask": "0x0" 00:05:26.050 }, 00:05:26.050 "bdev": { 00:05:26.050 "mask": "0x8", 00:05:26.050 "tpoint_mask": "0xffffffffffffffff" 00:05:26.050 }, 00:05:26.050 "nvmf_rdma": { 00:05:26.050 "mask": "0x10", 00:05:26.050 "tpoint_mask": "0x0" 00:05:26.050 }, 00:05:26.050 "nvmf_tcp": { 00:05:26.050 "mask": "0x20", 00:05:26.050 "tpoint_mask": "0x0" 00:05:26.050 }, 00:05:26.050 "ftl": { 00:05:26.050 "mask": "0x40", 00:05:26.050 "tpoint_mask": "0x0" 00:05:26.050 }, 00:05:26.050 "blobfs": { 00:05:26.050 "mask": "0x80", 00:05:26.050 "tpoint_mask": "0x0" 00:05:26.050 }, 00:05:26.050 "dsa": { 00:05:26.050 "mask": "0x200", 00:05:26.050 "tpoint_mask": "0x0" 00:05:26.050 }, 00:05:26.050 "thread": { 00:05:26.050 "mask": "0x400", 00:05:26.050 "tpoint_mask": "0x0" 00:05:26.050 }, 00:05:26.050 "nvme_pcie": { 00:05:26.050 "mask": "0x800", 00:05:26.050 "tpoint_mask": "0x0" 00:05:26.050 }, 00:05:26.050 "iaa": { 00:05:26.050 "mask": "0x1000", 00:05:26.050 "tpoint_mask": "0x0" 00:05:26.050 }, 00:05:26.050 "nvme_tcp": { 00:05:26.050 "mask": "0x2000", 00:05:26.050 "tpoint_mask": "0x0" 00:05:26.050 }, 00:05:26.050 "bdev_nvme": { 00:05:26.050 "mask": "0x4000", 00:05:26.050 "tpoint_mask": "0x0" 00:05:26.050 }, 00:05:26.050 "sock": { 00:05:26.050 "mask": "0x8000", 00:05:26.050 "tpoint_mask": "0x0" 00:05:26.050 }, 00:05:26.050 "blob": { 00:05:26.050 "mask": "0x10000", 00:05:26.050 "tpoint_mask": "0x0" 00:05:26.050 }, 00:05:26.050 "bdev_raid": { 00:05:26.050 "mask": "0x20000", 00:05:26.050 "tpoint_mask": "0x0" 00:05:26.050 }, 00:05:26.050 "scheduler": { 00:05:26.050 "mask": "0x40000", 00:05:26.050 "tpoint_mask": "0x0" 00:05:26.050 } 00:05:26.050 }' 00:05:26.050 23:10:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:26.050 23:10:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:26.050 23:10:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:26.050 23:10:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:26.050 23:10:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:26.310 23:10:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:26.310 23:10:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:26.310 23:10:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:26.310 23:10:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:26.310 23:10:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:26.310 00:05:26.310 real 0m0.178s 00:05:26.310 user 0m0.142s 00:05:26.310 sys 0m0.024s 00:05:26.310 23:10:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:26.310 ************************************ 00:05:26.310 END TEST rpc_trace_cmd_test 00:05:26.310 ************************************ 00:05:26.310 23:10:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:26.310 23:10:49 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:26.310 23:10:49 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:26.310 23:10:49 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:26.310 23:10:49 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.310 23:10:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.310 23:10:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.310 ************************************ 00:05:26.310 START TEST rpc_daemon_integrity 00:05:26.310 ************************************ 00:05:26.310 23:10:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.310 { 00:05:26.310 "name": "Malloc2", 00:05:26.310 "aliases": [ 00:05:26.310 "32bdd465-f574-40af-9c54-0927fd81dc52" 00:05:26.310 ], 00:05:26.310 "product_name": "Malloc disk", 00:05:26.310 "block_size": 512, 00:05:26.310 "num_blocks": 16384, 00:05:26.310 "uuid": "32bdd465-f574-40af-9c54-0927fd81dc52", 00:05:26.310 "assigned_rate_limits": { 00:05:26.310 "rw_ios_per_sec": 0, 00:05:26.310 "rw_mbytes_per_sec": 0, 00:05:26.310 "r_mbytes_per_sec": 0, 00:05:26.310 "w_mbytes_per_sec": 0 00:05:26.310 }, 00:05:26.310 "claimed": false, 00:05:26.310 "zoned": false, 00:05:26.310 "supported_io_types": { 00:05:26.310 "read": true, 00:05:26.310 "write": true, 00:05:26.310 "unmap": true, 00:05:26.310 "flush": true, 00:05:26.310 "reset": true, 00:05:26.310 "nvme_admin": false, 00:05:26.310 "nvme_io": false, 00:05:26.310 "nvme_io_md": false, 00:05:26.310 "write_zeroes": true, 00:05:26.310 "zcopy": true, 00:05:26.310 "get_zone_info": false, 00:05:26.310 "zone_management": false, 00:05:26.310 "zone_append": false, 00:05:26.310 "compare": false, 00:05:26.310 
"compare_and_write": false, 00:05:26.310 "abort": true, 00:05:26.310 "seek_hole": false, 00:05:26.310 "seek_data": false, 00:05:26.310 "copy": true, 00:05:26.310 "nvme_iov_md": false 00:05:26.310 }, 00:05:26.310 "memory_domains": [ 00:05:26.310 { 00:05:26.310 "dma_device_id": "system", 00:05:26.310 "dma_device_type": 1 00:05:26.310 }, 00:05:26.310 { 00:05:26.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.310 "dma_device_type": 2 00:05:26.310 } 00:05:26.310 ], 00:05:26.310 "driver_specific": {} 00:05:26.310 } 00:05:26.310 ]' 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.310 [2024-11-17 23:10:50.106743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:26.310 [2024-11-17 23:10:50.106796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.310 [2024-11-17 23:10:50.106817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:26.310 [2024-11-17 23:10:50.106826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.310 [2024-11-17 23:10:50.109033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.310 [2024-11-17 23:10:50.109067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.310 Passthru0 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.310 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:26.570 { 00:05:26.570 "name": "Malloc2", 00:05:26.570 "aliases": [ 00:05:26.570 "32bdd465-f574-40af-9c54-0927fd81dc52" 00:05:26.570 ], 00:05:26.570 "product_name": "Malloc disk", 00:05:26.570 "block_size": 512, 00:05:26.570 "num_blocks": 16384, 00:05:26.570 "uuid": "32bdd465-f574-40af-9c54-0927fd81dc52", 00:05:26.570 "assigned_rate_limits": { 00:05:26.570 "rw_ios_per_sec": 0, 00:05:26.570 "rw_mbytes_per_sec": 0, 00:05:26.570 "r_mbytes_per_sec": 0, 00:05:26.570 "w_mbytes_per_sec": 0 00:05:26.570 }, 00:05:26.570 "claimed": true, 00:05:26.570 "claim_type": "exclusive_write", 00:05:26.570 "zoned": false, 00:05:26.570 "supported_io_types": { 00:05:26.570 "read": true, 00:05:26.570 "write": true, 00:05:26.570 "unmap": true, 00:05:26.570 "flush": true, 00:05:26.570 "reset": true, 00:05:26.570 "nvme_admin": false, 00:05:26.570 "nvme_io": false, 00:05:26.570 "nvme_io_md": false, 00:05:26.570 "write_zeroes": true, 00:05:26.570 "zcopy": true, 00:05:26.570 "get_zone_info": false, 00:05:26.570 "zone_management": false, 00:05:26.570 "zone_append": false, 00:05:26.570 "compare": false, 00:05:26.570 "compare_and_write": false, 00:05:26.570 "abort": true, 00:05:26.570 "seek_hole": false, 00:05:26.570 "seek_data": false, 
00:05:26.570 "copy": true, 00:05:26.570 "nvme_iov_md": false 00:05:26.570 }, 00:05:26.570 "memory_domains": [ 00:05:26.570 { 00:05:26.570 "dma_device_id": "system", 00:05:26.570 "dma_device_type": 1 00:05:26.570 }, 00:05:26.570 { 00:05:26.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.570 "dma_device_type": 2 00:05:26.570 } 00:05:26.570 ], 00:05:26.570 "driver_specific": {} 00:05:26.570 }, 00:05:26.570 { 00:05:26.570 "name": "Passthru0", 00:05:26.570 "aliases": [ 00:05:26.570 "d54436f0-9242-51d7-9dce-23a2f5984879" 00:05:26.570 ], 00:05:26.570 "product_name": "passthru", 00:05:26.570 "block_size": 512, 00:05:26.570 "num_blocks": 16384, 00:05:26.570 "uuid": "d54436f0-9242-51d7-9dce-23a2f5984879", 00:05:26.570 "assigned_rate_limits": { 00:05:26.570 "rw_ios_per_sec": 0, 00:05:26.570 "rw_mbytes_per_sec": 0, 00:05:26.570 "r_mbytes_per_sec": 0, 00:05:26.570 "w_mbytes_per_sec": 0 00:05:26.570 }, 00:05:26.570 "claimed": false, 00:05:26.570 "zoned": false, 00:05:26.570 "supported_io_types": { 00:05:26.570 "read": true, 00:05:26.570 "write": true, 00:05:26.570 "unmap": true, 00:05:26.570 "flush": true, 00:05:26.570 "reset": true, 00:05:26.570 "nvme_admin": false, 00:05:26.570 "nvme_io": false, 00:05:26.570 "nvme_io_md": false, 00:05:26.570 "write_zeroes": true, 00:05:26.570 "zcopy": true, 00:05:26.570 "get_zone_info": false, 00:05:26.570 "zone_management": false, 00:05:26.570 "zone_append": false, 00:05:26.570 "compare": false, 00:05:26.570 "compare_and_write": false, 00:05:26.570 "abort": true, 00:05:26.570 "seek_hole": false, 00:05:26.570 "seek_data": false, 00:05:26.570 "copy": true, 00:05:26.570 "nvme_iov_md": false 00:05:26.570 }, 00:05:26.570 "memory_domains": [ 00:05:26.570 { 00:05:26.570 "dma_device_id": "system", 00:05:26.570 "dma_device_type": 1 00:05:26.570 }, 00:05:26.570 { 00:05:26.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.570 "dma_device_type": 2 00:05:26.570 } 00:05:26.570 ], 00:05:26.570 "driver_specific": { 00:05:26.570 "passthru": { 00:05:26.570 "name": "Passthru0", 00:05:26.570 "base_bdev_name": "Malloc2" 00:05:26.570 } 00:05:26.570 } 00:05:26.570 } 00:05:26.570 ]' 00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
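A hedged, by-hand replay of the integrity sequence above, using only RPC names that appear verbatim in the trace (rpc.py is SPDK's stock RPC client under scripts/; the default socket /var/tmp/spdk.sock is assumed):

    # Create an 8 MB malloc bdev with 512-byte blocks (16384 blocks, as dumped above).
    ./scripts/rpc.py bdev_malloc_create 8 512          # prints the new name, e.g. Malloc2
    ./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length        # 2: claimed Malloc2 + Passthru0
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc2
    ./scripts/rpc.py bdev_get_bdevs | jq length        # 0: cleanup verified, as the test checks next

Note the "claimed": true / "claim_type": "exclusive_write" fields in the second dump: registering the passthru vbdev claims its base bdev, which is exactly what the jq length checks around this point assert.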
00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:26.570 00:05:26.570 real 0m0.215s 00:05:26.570 user 0m0.122s 00:05:26.570 sys 0m0.031s 00:05:26.570 ************************************ 00:05:26.570 END TEST rpc_daemon_integrity 00:05:26.570 ************************************ 00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.570 23:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.570 23:10:50 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:26.570 23:10:50 rpc -- rpc/rpc.sh@84 -- # killprocess 69113 00:05:26.570 23:10:50 rpc -- common/autotest_common.sh@954 -- # '[' -z 69113 ']' 00:05:26.570 23:10:50 rpc -- common/autotest_common.sh@958 -- # kill -0 69113 00:05:26.570 23:10:50 rpc -- common/autotest_common.sh@959 -- # uname 00:05:26.570 23:10:50 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.570 23:10:50 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69113 00:05:26.570 23:10:50 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.570 23:10:50 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.570 killing process with pid 69113 00:05:26.570 23:10:50 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69113' 00:05:26.570 23:10:50 rpc -- common/autotest_common.sh@973 -- # kill 69113 00:05:26.570 23:10:50 rpc -- common/autotest_common.sh@978 -- # wait 69113 00:05:26.830 00:05:26.830 real 0m2.307s 00:05:26.830 user 0m2.776s 00:05:26.830 sys 0m0.548s 00:05:26.830 23:10:50 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.830 ************************************ 00:05:26.830 END TEST rpc 00:05:26.830 ************************************ 00:05:26.830 23:10:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.830 23:10:50 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:26.830 23:10:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.830 23:10:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.830 23:10:50 -- common/autotest_common.sh@10 -- # set +x 00:05:26.830 ************************************ 00:05:26.830 START TEST skip_rpc 00:05:26.830 ************************************ 00:05:26.830 23:10:50 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:27.088 * Looking for test storage... 
00:05:27.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:27.088 23:10:50 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:27.088 23:10:50 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:27.088 23:10:50 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:27.089 23:10:50 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.089 23:10:50 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:27.089 23:10:50 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.089 23:10:50 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:27.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.089 --rc genhtml_branch_coverage=1 00:05:27.089 --rc genhtml_function_coverage=1 00:05:27.089 --rc genhtml_legend=1 00:05:27.089 --rc geninfo_all_blocks=1 00:05:27.089 --rc geninfo_unexecuted_blocks=1 00:05:27.089 00:05:27.089 ' 00:05:27.089 23:10:50 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:27.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.089 --rc genhtml_branch_coverage=1 00:05:27.089 --rc genhtml_function_coverage=1 00:05:27.089 --rc genhtml_legend=1 00:05:27.089 --rc geninfo_all_blocks=1 00:05:27.089 --rc geninfo_unexecuted_blocks=1 00:05:27.089 00:05:27.089 ' 00:05:27.089 23:10:50 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:27.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.089 --rc genhtml_branch_coverage=1 00:05:27.089 --rc genhtml_function_coverage=1 00:05:27.089 --rc genhtml_legend=1 00:05:27.089 --rc geninfo_all_blocks=1 00:05:27.089 --rc geninfo_unexecuted_blocks=1 00:05:27.089 00:05:27.089 ' 00:05:27.089 23:10:50 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:27.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.089 --rc genhtml_branch_coverage=1 00:05:27.089 --rc genhtml_function_coverage=1 00:05:27.089 --rc genhtml_legend=1 00:05:27.089 --rc geninfo_all_blocks=1 00:05:27.089 --rc geninfo_unexecuted_blocks=1 00:05:27.089 00:05:27.089 ' 00:05:27.089 23:10:50 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:27.089 23:10:50 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:27.089 23:10:50 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:27.089 23:10:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.089 23:10:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.089 23:10:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.089 ************************************ 00:05:27.089 START TEST skip_rpc 00:05:27.089 ************************************ 00:05:27.089 23:10:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:27.089 23:10:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69309 00:05:27.089 23:10:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.089 23:10:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:27.089 23:10:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:27.089 [2024-11-17 23:10:50.833906] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
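The target launched just above was started with --no-rpc-server, so the spdk_get_version call attempted next is expected to fail; a minimal sketch of the same negative check (paths match the trace, and the 5-second sleep mirrors the test's startup grace period):

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5                                   # same grace period the test uses
    ./scripts/rpc.py spdk_get_version         # must fail: nothing listens on /var/tmp/spdk.sock

The test wraps the call in NOT ... and asserts es=1, i.e. the suite passes only if the RPC client errors out.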
00:05:27.089 [2024-11-17 23:10:50.834326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69309 ] 00:05:27.348 [2024-11-17 23:10:50.981299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.348 [2024-11-17 23:10:51.000251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69309 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69309 ']' 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69309 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69309 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.615 killing process with pid 69309 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69309' 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69309 00:05:32.615 23:10:55 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69309 00:05:32.615 00:05:32.615 real 0m5.252s 00:05:32.615 user 0m4.930s 00:05:32.615 sys 0m0.222s 00:05:32.615 23:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.615 ************************************ 00:05:32.615 END TEST skip_rpc 00:05:32.615 ************************************ 00:05:32.615 23:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:32.615 23:10:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:32.615 23:10:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.615 23:10:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.615 23:10:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.615 ************************************ 00:05:32.615 START TEST skip_rpc_with_json 00:05:32.615 ************************************ 00:05:32.615 23:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:32.615 23:10:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:32.615 23:10:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69391 00:05:32.615 23:10:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.615 23:10:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69391 00:05:32.615 23:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69391 ']' 00:05:32.615 23:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.615 23:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.615 23:10:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.615 23:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.615 23:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.615 23:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.615 [2024-11-17 23:10:56.138861] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
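The skip_rpc_with_json pass that starts here round-trips live configuration through JSON: create a TCP transport over RPC, dump everything with save_config, then restart the target from that file with the RPC server disabled and grep the log for the transport banner. A hedged sketch of the same round trip (every RPC name and flag below appears in the trace that follows):

    ./build/bin/spdk_tgt -m 0x1 &
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > config.json
    kill %1 && wait
    # Reload purely from JSON; 'TCP Transport Init' in the new log proves the
    # transport was rebuilt without any RPC traffic.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json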
00:05:32.615 [2024-11-17 23:10:56.138990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69391 ] 00:05:32.615 [2024-11-17 23:10:56.279242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.615 [2024-11-17 23:10:56.295804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.182 23:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.182 23:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:33.182 23:10:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:33.182 23:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.183 23:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.183 [2024-11-17 23:10:56.970859] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:33.183 request: 00:05:33.183 { 00:05:33.183 "trtype": "tcp", 00:05:33.183 "method": "nvmf_get_transports", 00:05:33.183 "req_id": 1 00:05:33.183 } 00:05:33.183 Got JSON-RPC error response 00:05:33.183 response: 00:05:33.183 { 00:05:33.183 "code": -19, 00:05:33.183 "message": "No such device" 00:05:33.183 } 00:05:33.183 23:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:33.183 23:10:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:33.183 23:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.183 23:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.183 [2024-11-17 23:10:56.982958] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.183 23:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.183 23:10:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:33.183 23:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.183 23:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.441 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.441 23:10:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:33.441 { 00:05:33.441 "subsystems": [ 00:05:33.441 { 00:05:33.441 "subsystem": "fsdev", 00:05:33.441 "config": [ 00:05:33.441 { 00:05:33.441 "method": "fsdev_set_opts", 00:05:33.441 "params": { 00:05:33.441 "fsdev_io_pool_size": 65535, 00:05:33.441 "fsdev_io_cache_size": 256 00:05:33.441 } 00:05:33.441 } 00:05:33.441 ] 00:05:33.441 }, 00:05:33.441 { 00:05:33.441 "subsystem": "keyring", 00:05:33.441 "config": [] 00:05:33.441 }, 00:05:33.441 { 00:05:33.441 "subsystem": "iobuf", 00:05:33.441 "config": [ 00:05:33.441 { 00:05:33.441 "method": "iobuf_set_options", 00:05:33.441 "params": { 00:05:33.441 "small_pool_count": 8192, 00:05:33.441 "large_pool_count": 1024, 00:05:33.441 "small_bufsize": 8192, 00:05:33.441 "large_bufsize": 135168, 00:05:33.441 "enable_numa": false 00:05:33.441 } 00:05:33.441 } 00:05:33.441 ] 00:05:33.441 }, 00:05:33.441 { 00:05:33.441 "subsystem": "sock", 00:05:33.441 "config": [ 00:05:33.441 { 
00:05:33.441 "method": "sock_set_default_impl", 00:05:33.441 "params": { 00:05:33.441 "impl_name": "posix" 00:05:33.441 } 00:05:33.441 }, 00:05:33.441 { 00:05:33.441 "method": "sock_impl_set_options", 00:05:33.441 "params": { 00:05:33.441 "impl_name": "ssl", 00:05:33.441 "recv_buf_size": 4096, 00:05:33.441 "send_buf_size": 4096, 00:05:33.441 "enable_recv_pipe": true, 00:05:33.441 "enable_quickack": false, 00:05:33.441 "enable_placement_id": 0, 00:05:33.441 "enable_zerocopy_send_server": true, 00:05:33.441 "enable_zerocopy_send_client": false, 00:05:33.441 "zerocopy_threshold": 0, 00:05:33.441 "tls_version": 0, 00:05:33.441 "enable_ktls": false 00:05:33.441 } 00:05:33.441 }, 00:05:33.441 { 00:05:33.441 "method": "sock_impl_set_options", 00:05:33.441 "params": { 00:05:33.441 "impl_name": "posix", 00:05:33.441 "recv_buf_size": 2097152, 00:05:33.441 "send_buf_size": 2097152, 00:05:33.441 "enable_recv_pipe": true, 00:05:33.441 "enable_quickack": false, 00:05:33.441 "enable_placement_id": 0, 00:05:33.441 "enable_zerocopy_send_server": true, 00:05:33.441 "enable_zerocopy_send_client": false, 00:05:33.441 "zerocopy_threshold": 0, 00:05:33.441 "tls_version": 0, 00:05:33.441 "enable_ktls": false 00:05:33.441 } 00:05:33.441 } 00:05:33.441 ] 00:05:33.441 }, 00:05:33.441 { 00:05:33.441 "subsystem": "vmd", 00:05:33.441 "config": [] 00:05:33.441 }, 00:05:33.441 { 00:05:33.441 "subsystem": "accel", 00:05:33.441 "config": [ 00:05:33.441 { 00:05:33.441 "method": "accel_set_options", 00:05:33.441 "params": { 00:05:33.441 "small_cache_size": 128, 00:05:33.441 "large_cache_size": 16, 00:05:33.441 "task_count": 2048, 00:05:33.441 "sequence_count": 2048, 00:05:33.441 "buf_count": 2048 00:05:33.441 } 00:05:33.441 } 00:05:33.441 ] 00:05:33.441 }, 00:05:33.441 { 00:05:33.441 "subsystem": "bdev", 00:05:33.441 "config": [ 00:05:33.441 { 00:05:33.441 "method": "bdev_set_options", 00:05:33.441 "params": { 00:05:33.441 "bdev_io_pool_size": 65535, 00:05:33.441 "bdev_io_cache_size": 256, 00:05:33.441 "bdev_auto_examine": true, 00:05:33.441 "iobuf_small_cache_size": 128, 00:05:33.441 "iobuf_large_cache_size": 16 00:05:33.441 } 00:05:33.441 }, 00:05:33.441 { 00:05:33.441 "method": "bdev_raid_set_options", 00:05:33.442 "params": { 00:05:33.442 "process_window_size_kb": 1024, 00:05:33.442 "process_max_bandwidth_mb_sec": 0 00:05:33.442 } 00:05:33.442 }, 00:05:33.442 { 00:05:33.442 "method": "bdev_iscsi_set_options", 00:05:33.442 "params": { 00:05:33.442 "timeout_sec": 30 00:05:33.442 } 00:05:33.442 }, 00:05:33.442 { 00:05:33.442 "method": "bdev_nvme_set_options", 00:05:33.442 "params": { 00:05:33.442 "action_on_timeout": "none", 00:05:33.442 "timeout_us": 0, 00:05:33.442 "timeout_admin_us": 0, 00:05:33.442 "keep_alive_timeout_ms": 10000, 00:05:33.442 "arbitration_burst": 0, 00:05:33.442 "low_priority_weight": 0, 00:05:33.442 "medium_priority_weight": 0, 00:05:33.442 "high_priority_weight": 0, 00:05:33.442 "nvme_adminq_poll_period_us": 10000, 00:05:33.442 "nvme_ioq_poll_period_us": 0, 00:05:33.442 "io_queue_requests": 0, 00:05:33.442 "delay_cmd_submit": true, 00:05:33.442 "transport_retry_count": 4, 00:05:33.442 "bdev_retry_count": 3, 00:05:33.442 "transport_ack_timeout": 0, 00:05:33.442 "ctrlr_loss_timeout_sec": 0, 00:05:33.442 "reconnect_delay_sec": 0, 00:05:33.442 "fast_io_fail_timeout_sec": 0, 00:05:33.442 "disable_auto_failback": false, 00:05:33.442 "generate_uuids": false, 00:05:33.442 "transport_tos": 0, 00:05:33.442 "nvme_error_stat": false, 00:05:33.442 "rdma_srq_size": 0, 00:05:33.442 "io_path_stat": false, 
00:05:33.442 "allow_accel_sequence": false, 00:05:33.442 "rdma_max_cq_size": 0, 00:05:33.442 "rdma_cm_event_timeout_ms": 0, 00:05:33.442 "dhchap_digests": [ 00:05:33.442 "sha256", 00:05:33.442 "sha384", 00:05:33.442 "sha512" 00:05:33.442 ], 00:05:33.442 "dhchap_dhgroups": [ 00:05:33.442 "null", 00:05:33.442 "ffdhe2048", 00:05:33.442 "ffdhe3072", 00:05:33.442 "ffdhe4096", 00:05:33.442 "ffdhe6144", 00:05:33.442 "ffdhe8192" 00:05:33.442 ] 00:05:33.442 } 00:05:33.442 }, 00:05:33.442 { 00:05:33.442 "method": "bdev_nvme_set_hotplug", 00:05:33.442 "params": { 00:05:33.442 "period_us": 100000, 00:05:33.442 "enable": false 00:05:33.442 } 00:05:33.442 }, 00:05:33.442 { 00:05:33.442 "method": "bdev_wait_for_examine" 00:05:33.442 } 00:05:33.442 ] 00:05:33.442 }, 00:05:33.442 { 00:05:33.442 "subsystem": "scsi", 00:05:33.442 "config": null 00:05:33.442 }, 00:05:33.442 { 00:05:33.442 "subsystem": "scheduler", 00:05:33.442 "config": [ 00:05:33.442 { 00:05:33.442 "method": "framework_set_scheduler", 00:05:33.442 "params": { 00:05:33.442 "name": "static" 00:05:33.442 } 00:05:33.442 } 00:05:33.442 ] 00:05:33.442 }, 00:05:33.442 { 00:05:33.442 "subsystem": "vhost_scsi", 00:05:33.442 "config": [] 00:05:33.442 }, 00:05:33.442 { 00:05:33.442 "subsystem": "vhost_blk", 00:05:33.442 "config": [] 00:05:33.442 }, 00:05:33.442 { 00:05:33.442 "subsystem": "ublk", 00:05:33.442 "config": [] 00:05:33.442 }, 00:05:33.442 { 00:05:33.442 "subsystem": "nbd", 00:05:33.442 "config": [] 00:05:33.442 }, 00:05:33.442 { 00:05:33.442 "subsystem": "nvmf", 00:05:33.442 "config": [ 00:05:33.442 { 00:05:33.442 "method": "nvmf_set_config", 00:05:33.442 "params": { 00:05:33.442 "discovery_filter": "match_any", 00:05:33.442 "admin_cmd_passthru": { 00:05:33.442 "identify_ctrlr": false 00:05:33.442 }, 00:05:33.442 "dhchap_digests": [ 00:05:33.442 "sha256", 00:05:33.442 "sha384", 00:05:33.442 "sha512" 00:05:33.442 ], 00:05:33.442 "dhchap_dhgroups": [ 00:05:33.442 "null", 00:05:33.442 "ffdhe2048", 00:05:33.442 "ffdhe3072", 00:05:33.442 "ffdhe4096", 00:05:33.442 "ffdhe6144", 00:05:33.442 "ffdhe8192" 00:05:33.442 ] 00:05:33.442 } 00:05:33.442 }, 00:05:33.442 { 00:05:33.442 "method": "nvmf_set_max_subsystems", 00:05:33.442 "params": { 00:05:33.442 "max_subsystems": 1024 00:05:33.442 } 00:05:33.442 }, 00:05:33.442 { 00:05:33.442 "method": "nvmf_set_crdt", 00:05:33.442 "params": { 00:05:33.442 "crdt1": 0, 00:05:33.442 "crdt2": 0, 00:05:33.442 "crdt3": 0 00:05:33.442 } 00:05:33.442 }, 00:05:33.442 { 00:05:33.442 "method": "nvmf_create_transport", 00:05:33.442 "params": { 00:05:33.442 "trtype": "TCP", 00:05:33.442 "max_queue_depth": 128, 00:05:33.442 "max_io_qpairs_per_ctrlr": 127, 00:05:33.442 "in_capsule_data_size": 4096, 00:05:33.442 "max_io_size": 131072, 00:05:33.442 "io_unit_size": 131072, 00:05:33.442 "max_aq_depth": 128, 00:05:33.442 "num_shared_buffers": 511, 00:05:33.442 "buf_cache_size": 4294967295, 00:05:33.442 "dif_insert_or_strip": false, 00:05:33.442 "zcopy": false, 00:05:33.442 "c2h_success": true, 00:05:33.442 "sock_priority": 0, 00:05:33.442 "abort_timeout_sec": 1, 00:05:33.442 "ack_timeout": 0, 00:05:33.442 "data_wr_pool_size": 0 00:05:33.442 } 00:05:33.442 } 00:05:33.442 ] 00:05:33.442 }, 00:05:33.442 { 00:05:33.442 "subsystem": "iscsi", 00:05:33.442 "config": [ 00:05:33.442 { 00:05:33.442 "method": "iscsi_set_options", 00:05:33.442 "params": { 00:05:33.442 "node_base": "iqn.2016-06.io.spdk", 00:05:33.442 "max_sessions": 128, 00:05:33.442 "max_connections_per_session": 2, 00:05:33.442 "max_queue_depth": 64, 00:05:33.442 
"default_time2wait": 2, 00:05:33.442 "default_time2retain": 20, 00:05:33.442 "first_burst_length": 8192, 00:05:33.442 "immediate_data": true, 00:05:33.442 "allow_duplicated_isid": false, 00:05:33.442 "error_recovery_level": 0, 00:05:33.442 "nop_timeout": 60, 00:05:33.442 "nop_in_interval": 30, 00:05:33.442 "disable_chap": false, 00:05:33.442 "require_chap": false, 00:05:33.442 "mutual_chap": false, 00:05:33.442 "chap_group": 0, 00:05:33.442 "max_large_datain_per_connection": 64, 00:05:33.442 "max_r2t_per_connection": 4, 00:05:33.442 "pdu_pool_size": 36864, 00:05:33.442 "immediate_data_pool_size": 16384, 00:05:33.442 "data_out_pool_size": 2048 00:05:33.442 } 00:05:33.442 } 00:05:33.442 ] 00:05:33.442 } 00:05:33.442 ] 00:05:33.442 } 00:05:33.442 23:10:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:33.442 23:10:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69391 00:05:33.442 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69391 ']' 00:05:33.442 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69391 00:05:33.442 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:33.442 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.442 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69391 00:05:33.442 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.442 killing process with pid 69391 00:05:33.442 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.442 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69391' 00:05:33.442 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69391 00:05:33.442 23:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69391 00:05:33.701 23:10:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69420 00:05:33.701 23:10:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:33.701 23:10:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69420 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69420 ']' 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69420 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69420 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.032 killing process with pid 69420 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69420' 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 69420 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69420 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:39.032 00:05:39.032 real 0m6.572s 00:05:39.032 user 0m6.287s 00:05:39.032 sys 0m0.513s 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.032 ************************************ 00:05:39.032 END TEST skip_rpc_with_json 00:05:39.032 ************************************ 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.032 23:11:02 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:39.032 23:11:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.032 23:11:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.032 23:11:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.032 ************************************ 00:05:39.032 START TEST skip_rpc_with_delay 00:05:39.032 ************************************ 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.032 [2024-11-17 23:11:02.774628] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
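The error that closes the block above is deliberate: --wait-for-rpc parks subsystem initialization until an RPC arrives, which can never happen under --no-rpc-server, so spdk_tgt refuses the combination up front. A sketch of the valid pairing (framework_start_init is the standard SPDK RPC that resumes deferred startup; it does not appear in this trace, so treat the exact flow as illustrative):

    ./build/bin/spdk_tgt -m 0x1 --wait-for-rpc &     # RPC server up, subsystems paused
    ./scripts/rpc.py framework_start_init            # resume initialization over RPC
    # Invalid, as the trace shows: --no-rpc-server --wait-for-rpc leaves no way
    # to ever deliver framework_start_init, so the app exits with an error.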
00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:39.032 00:05:39.032 real 0m0.125s 00:05:39.032 user 0m0.057s 00:05:39.032 sys 0m0.063s 00:05:39.032 ************************************ 00:05:39.032 END TEST skip_rpc_with_delay 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.032 23:11:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:39.032 ************************************ 00:05:39.290 23:11:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:39.290 23:11:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:39.290 23:11:02 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:39.290 23:11:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.290 23:11:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.290 23:11:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.290 ************************************ 00:05:39.290 START TEST exit_on_failed_rpc_init 00:05:39.290 ************************************ 00:05:39.290 23:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:39.290 23:11:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69531 00:05:39.290 23:11:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69531 00:05:39.290 23:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69531 ']' 00:05:39.290 23:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.290 23:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.290 23:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.290 23:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.290 23:11:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:39.290 23:11:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.290 [2024-11-17 23:11:02.951402] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:05:39.290 [2024-11-17 23:11:02.951522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69531 ] 00:05:39.290 [2024-11-17 23:11:03.096131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.547 [2024-11-17 23:11:03.115256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.113 23:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.113 23:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:40.113 23:11:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.113 23:11:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.113 23:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:40.113 23:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.113 23:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.113 23:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.113 23:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.113 23:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.113 23:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.113 23:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.113 23:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.113 23:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:40.113 23:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.113 [2024-11-17 23:11:03.854367] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:40.113 [2024-11-17 23:11:03.854519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69549 ] 00:05:40.372 [2024-11-17 23:11:03.995700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.372 [2024-11-17 23:11:04.014246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.372 [2024-11-17 23:11:04.014320] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:40.372 [2024-11-17 23:11:04.014334] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:40.372 [2024-11-17 23:11:04.014348] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.372 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:40.372 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:40.372 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:40.372 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:40.372 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:40.372 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:40.372 23:11:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:40.372 23:11:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69531 00:05:40.372 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69531 ']' 00:05:40.372 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69531 00:05:40.372 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:40.372 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.372 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69531 00:05:40.372 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.372 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.372 killing process with pid 69531 00:05:40.372 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69531' 00:05:40.372 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69531 00:05:40.372 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69531 00:05:40.630 00:05:40.630 real 0m1.472s 00:05:40.630 user 0m1.621s 00:05:40.630 sys 0m0.344s 00:05:40.630 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.630 ************************************ 00:05:40.630 END TEST exit_on_failed_rpc_init 00:05:40.630 ************************************ 00:05:40.630 23:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.630 23:11:04 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:40.630 00:05:40.630 real 0m13.809s 00:05:40.630 user 0m13.042s 00:05:40.630 sys 0m1.320s 00:05:40.630 23:11:04 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.630 ************************************ 00:05:40.630 END TEST skip_rpc 00:05:40.630 ************************************ 00:05:40.630 23:11:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.889 23:11:04 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:40.889 23:11:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.889 23:11:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.889 23:11:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.889 
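The exit_on_failed_rpc_init failure above is a socket collision: both targets default to /var/tmp/spdk.sock, so the second instance logs 'Specify another.' and spdk_app_stop's non-zero status propagates as es=1. A sketch of the collision and the usual way around it (-r selects the RPC listen socket; the flag is not exercised in this trace and the second socket path is illustrative):

    ./build/bin/spdk_tgt -m 0x1 &                         # owns /var/tmp/spdk.sock
    ./build/bin/spdk_tgt -m 0x2                           # fails: socket path in use
    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &  # coexists on its own socket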
************************************ 00:05:40.889 START TEST rpc_client 00:05:40.889 ************************************ 00:05:40.889 23:11:04 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:40.889 * Looking for test storage... 00:05:40.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:40.889 23:11:04 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:40.889 23:11:04 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:40.889 23:11:04 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:40.889 23:11:04 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:40.889 23:11:04 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.889 23:11:04 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.889 23:11:04 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.889 23:11:04 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.889 23:11:04 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.890 23:11:04 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:40.890 23:11:04 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.890 23:11:04 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:40.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.890 --rc genhtml_branch_coverage=1 00:05:40.890 --rc genhtml_function_coverage=1 00:05:40.890 --rc genhtml_legend=1 00:05:40.890 --rc geninfo_all_blocks=1 00:05:40.890 --rc geninfo_unexecuted_blocks=1 00:05:40.890 00:05:40.890 ' 00:05:40.890 23:11:04 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:40.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.890 --rc genhtml_branch_coverage=1 00:05:40.890 --rc genhtml_function_coverage=1 00:05:40.890 --rc genhtml_legend=1 00:05:40.890 --rc geninfo_all_blocks=1 00:05:40.890 --rc geninfo_unexecuted_blocks=1 00:05:40.890 00:05:40.890 ' 00:05:40.890 23:11:04 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:40.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.890 --rc genhtml_branch_coverage=1 00:05:40.890 --rc genhtml_function_coverage=1 00:05:40.890 --rc genhtml_legend=1 00:05:40.890 --rc geninfo_all_blocks=1 00:05:40.890 --rc geninfo_unexecuted_blocks=1 00:05:40.890 00:05:40.890 ' 00:05:40.890 23:11:04 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:40.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.890 --rc genhtml_branch_coverage=1 00:05:40.890 --rc genhtml_function_coverage=1 00:05:40.890 --rc genhtml_legend=1 00:05:40.890 --rc geninfo_all_blocks=1 00:05:40.890 --rc geninfo_unexecuted_blocks=1 00:05:40.890 00:05:40.890 ' 00:05:40.890 23:11:04 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:40.890 OK 00:05:40.890 23:11:04 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:40.890 ************************************ 00:05:40.890 END TEST rpc_client 00:05:40.890 ************************************ 00:05:40.890 00:05:40.890 real 0m0.187s 00:05:40.890 user 0m0.103s 00:05:40.890 sys 0m0.086s 00:05:40.890 23:11:04 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.890 23:11:04 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:40.890 23:11:04 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:40.890 23:11:04 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.890 23:11:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.890 23:11:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.890 ************************************ 00:05:40.890 START TEST json_config 00:05:40.890 ************************************ 00:05:40.890 23:11:04 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:41.149 23:11:04 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.149 23:11:04 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.149 23:11:04 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.149 23:11:04 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.149 23:11:04 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.149 23:11:04 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.149 23:11:04 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.149 23:11:04 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.149 23:11:04 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.149 23:11:04 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.149 23:11:04 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.149 23:11:04 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.149 23:11:04 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.149 23:11:04 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.149 23:11:04 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.149 23:11:04 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:41.149 23:11:04 json_config -- scripts/common.sh@345 -- # : 1 00:05:41.149 23:11:04 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.149 23:11:04 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.149 23:11:04 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:41.149 23:11:04 json_config -- scripts/common.sh@353 -- # local d=1 00:05:41.149 23:11:04 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.149 23:11:04 json_config -- scripts/common.sh@355 -- # echo 1 00:05:41.149 23:11:04 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.149 23:11:04 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:41.149 23:11:04 json_config -- scripts/common.sh@353 -- # local d=2 00:05:41.149 23:11:04 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.149 23:11:04 json_config -- scripts/common.sh@355 -- # echo 2 00:05:41.149 23:11:04 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.149 23:11:04 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.149 23:11:04 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.149 23:11:04 json_config -- scripts/common.sh@368 -- # return 0 00:05:41.149 23:11:04 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.149 23:11:04 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.149 --rc genhtml_branch_coverage=1 00:05:41.149 --rc genhtml_function_coverage=1 00:05:41.149 --rc genhtml_legend=1 00:05:41.149 --rc geninfo_all_blocks=1 00:05:41.149 --rc geninfo_unexecuted_blocks=1 00:05:41.149 00:05:41.149 ' 00:05:41.149 23:11:04 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.149 --rc genhtml_branch_coverage=1 00:05:41.149 --rc genhtml_function_coverage=1 00:05:41.149 --rc genhtml_legend=1 00:05:41.149 --rc geninfo_all_blocks=1 00:05:41.149 --rc geninfo_unexecuted_blocks=1 00:05:41.149 00:05:41.149 ' 00:05:41.149 23:11:04 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.149 --rc genhtml_branch_coverage=1 00:05:41.149 --rc genhtml_function_coverage=1 00:05:41.149 --rc genhtml_legend=1 00:05:41.149 --rc geninfo_all_blocks=1 00:05:41.149 --rc geninfo_unexecuted_blocks=1 00:05:41.149 00:05:41.149 ' 00:05:41.149 23:11:04 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.149 --rc genhtml_branch_coverage=1 00:05:41.149 --rc genhtml_function_coverage=1 00:05:41.149 --rc genhtml_legend=1 00:05:41.149 --rc geninfo_all_blocks=1 00:05:41.149 --rc geninfo_unexecuted_blocks=1 00:05:41.149 00:05:41.149 ' 00:05:41.149 23:11:04 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:41.149 23:11:04 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:41.149 23:11:04 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.149 23:11:04 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.150 23:11:04 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5eabce85-047d-4605-a6cf-5b958243ebf4 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5eabce85-047d-4605-a6cf-5b958243ebf4 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:41.150 23:11:04 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:41.150 23:11:04 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.150 23:11:04 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.150 23:11:04 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.150 23:11:04 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.150 23:11:04 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.150 23:11:04 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.150 23:11:04 json_config -- paths/export.sh@5 -- # export PATH 00:05:41.150 23:11:04 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@51 -- # : 0 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:41.150 23:11:04 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:41.150 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:41.150 23:11:04 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:41.150 23:11:04 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:41.150 WARNING: No tests are enabled so not running JSON configuration tests 00:05:41.150 23:11:04 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:41.150 23:11:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:41.150 23:11:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:41.150 23:11:04 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:41.150 23:11:04 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:41.150 23:11:04 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:41.150 ************************************ 00:05:41.150 END TEST json_config 00:05:41.150 ************************************ 00:05:41.150 00:05:41.150 real 0m0.148s 00:05:41.150 user 0m0.089s 00:05:41.150 sys 0m0.055s 00:05:41.150 23:11:04 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.150 23:11:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.150 23:11:04 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:41.150 23:11:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.150 23:11:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.150 23:11:04 -- common/autotest_common.sh@10 -- # set +x 00:05:41.150 ************************************ 00:05:41.150 START TEST json_config_extra_key 00:05:41.150 ************************************ 00:05:41.150 23:11:04 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:41.150 23:11:04 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.150 23:11:04 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.150 23:11:04 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.408 23:11:05 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.408 23:11:05 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:41.408 23:11:05 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.408 23:11:05 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.408 --rc genhtml_branch_coverage=1 00:05:41.408 --rc genhtml_function_coverage=1 00:05:41.408 --rc genhtml_legend=1 00:05:41.408 --rc geninfo_all_blocks=1 00:05:41.408 --rc geninfo_unexecuted_blocks=1 00:05:41.408 00:05:41.408 ' 00:05:41.408 23:11:05 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.408 --rc genhtml_branch_coverage=1 00:05:41.408 --rc genhtml_function_coverage=1 00:05:41.408 --rc genhtml_legend=1 00:05:41.408 --rc geninfo_all_blocks=1 00:05:41.408 --rc geninfo_unexecuted_blocks=1 00:05:41.408 00:05:41.408 ' 00:05:41.408 23:11:05 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.408 --rc genhtml_branch_coverage=1 00:05:41.408 --rc genhtml_function_coverage=1 00:05:41.408 --rc genhtml_legend=1 00:05:41.408 --rc geninfo_all_blocks=1 00:05:41.408 --rc geninfo_unexecuted_blocks=1 00:05:41.408 00:05:41.408 ' 00:05:41.408 23:11:05 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.408 --rc genhtml_branch_coverage=1 00:05:41.408 --rc 
genhtml_function_coverage=1 00:05:41.408 --rc genhtml_legend=1 00:05:41.408 --rc geninfo_all_blocks=1 00:05:41.408 --rc geninfo_unexecuted_blocks=1 00:05:41.408 00:05:41.408 ' 00:05:41.408 23:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5eabce85-047d-4605-a6cf-5b958243ebf4 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5eabce85-047d-4605-a6cf-5b958243ebf4 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.408 23:11:05 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.408 23:11:05 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.408 23:11:05 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.408 23:11:05 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.408 23:11:05 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:41.408 23:11:05 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:41.408 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:41.408 23:11:05 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:41.408 23:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:41.408 23:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:41.408 23:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:41.408 23:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:41.408 23:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:41.409 23:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:41.409 23:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:41.409 INFO: launching applications... 00:05:41.409 23:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:41.409 23:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:41.409 23:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.409 23:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
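The message repeated in both JSON-config runs above, "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected", is what bash prints when a numeric test such as [ "$flag" -eq 1 ] receives an empty string. A minimal sketch of the usual guard; the variable name is illustrative, not the one common.sh actually uses:

    flag=''
    # [ "$flag" -eq 1 ]               # prints "[: : integer expression expected"
    if [ "${flag:-0}" -eq 1 ]; then   # defaulting the expansion keeps the operand numeric
        echo 'feature enabled'
    fi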
00:05:41.409 23:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:41.409 23:11:05 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:41.409 23:11:05 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:41.409 23:11:05 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.409 23:11:05 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.409 23:11:05 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.409 23:11:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.409 23:11:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.409 23:11:05 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69726 00:05:41.409 Waiting for target to run... 00:05:41.409 23:11:05 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:41.409 23:11:05 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69726 /var/tmp/spdk_tgt.sock 00:05:41.409 23:11:05 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 69726 ']' 00:05:41.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.409 23:11:05 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.409 23:11:05 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.409 23:11:05 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.409 23:11:05 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.409 23:11:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:41.409 23:11:05 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:41.409 [2024-11-17 23:11:05.122864] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:41.409 [2024-11-17 23:11:05.123031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69726 ] 00:05:41.666 [2024-11-17 23:11:05.421637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.666 [2024-11-17 23:11:05.433047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.233 23:11:05 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.233 23:11:05 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:42.233 00:05:42.233 INFO: shutting down applications... 00:05:42.233 23:11:05 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:42.233 23:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
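The shutdown that follows sends SIGINT and then polls the pid for up to 30 half-second intervals, using kill -0 purely as an existence probe, before printing 'SPDK target shutdown done'. A minimal sketch of that pattern, assuming only the pid captured at launch:

    pid=69726                                 # pid reported by the launch step above
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 only tests that the process exists
        sleep 0.5
    done
    echo 'SPDK target shutdown done'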
00:05:42.233 23:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:42.233 23:11:05 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:42.233 23:11:05 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:42.233 23:11:05 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69726 ]] 00:05:42.233 23:11:05 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69726 00:05:42.233 23:11:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:42.233 23:11:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.233 23:11:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69726 00:05:42.233 23:11:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:42.798 23:11:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:42.798 23:11:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.798 23:11:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69726 00:05:42.798 23:11:06 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:42.798 23:11:06 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:42.798 SPDK target shutdown done 00:05:42.798 23:11:06 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:42.798 23:11:06 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:42.798 Success 00:05:42.798 23:11:06 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:42.798 00:05:42.798 real 0m1.558s 00:05:42.798 user 0m1.262s 00:05:42.798 sys 0m0.349s 00:05:42.798 23:11:06 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.798 23:11:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:42.798 ************************************ 00:05:42.798 END TEST json_config_extra_key 00:05:42.798 ************************************ 00:05:42.798 23:11:06 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:42.798 23:11:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.798 23:11:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.798 23:11:06 -- common/autotest_common.sh@10 -- # set +x 00:05:42.798 ************************************ 00:05:42.798 START TEST alias_rpc 00:05:42.798 ************************************ 00:05:42.798 23:11:06 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:42.798 * Looking for test storage... 
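The alias_rpc run that follows starts spdk_tgt on a UNIX-domain RPC socket and blocks in waitforlisten (max_retries=100 in the trace) until the target answers. A minimal sketch of that start-and-poll pattern; the readiness probe shown is an assumption, since the harness's own waitforlisten may check differently:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock

    "$bin" &                                   # launch the target in the background
    pid=$!
    for (( i = 0; i < 100; i++ )); do          # mirrors max_retries=100 from the trace
        "$rpc" -t 1 -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.5
    done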
00:05:42.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:42.798 23:11:06 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:42.798 23:11:06 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:42.798 23:11:06 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.056 23:11:06 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.056 23:11:06 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:43.056 23:11:06 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.056 23:11:06 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.056 --rc genhtml_branch_coverage=1 00:05:43.056 --rc genhtml_function_coverage=1 00:05:43.056 --rc genhtml_legend=1 00:05:43.056 --rc geninfo_all_blocks=1 00:05:43.056 --rc geninfo_unexecuted_blocks=1 00:05:43.056 00:05:43.056 ' 00:05:43.056 23:11:06 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.056 --rc genhtml_branch_coverage=1 00:05:43.056 --rc genhtml_function_coverage=1 00:05:43.056 --rc genhtml_legend=1 00:05:43.056 --rc geninfo_all_blocks=1 00:05:43.056 --rc geninfo_unexecuted_blocks=1 00:05:43.056 00:05:43.056 ' 00:05:43.056 23:11:06 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:43.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.056 --rc genhtml_branch_coverage=1 00:05:43.056 --rc genhtml_function_coverage=1 00:05:43.056 --rc genhtml_legend=1 00:05:43.056 --rc geninfo_all_blocks=1 00:05:43.056 --rc geninfo_unexecuted_blocks=1 00:05:43.056 00:05:43.056 ' 00:05:43.056 23:11:06 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.057 --rc genhtml_branch_coverage=1 00:05:43.057 --rc genhtml_function_coverage=1 00:05:43.057 --rc genhtml_legend=1 00:05:43.057 --rc geninfo_all_blocks=1 00:05:43.057 --rc geninfo_unexecuted_blocks=1 00:05:43.057 00:05:43.057 ' 00:05:43.057 23:11:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:43.057 23:11:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69800 00:05:43.057 23:11:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69800 00:05:43.057 23:11:06 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 69800 ']' 00:05:43.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.057 23:11:06 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.057 23:11:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.057 23:11:06 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.057 23:11:06 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.057 23:11:06 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.057 23:11:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.057 [2024-11-17 23:11:06.763585] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:05:43.057 [2024-11-17 23:11:06.763717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69800 ] 00:05:43.316 [2024-11-17 23:11:06.912143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.316 [2024-11-17 23:11:06.932790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.887 23:11:07 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.887 23:11:07 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:43.887 23:11:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:44.147 23:11:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69800 00:05:44.147 23:11:07 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 69800 ']' 00:05:44.147 23:11:07 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 69800 00:05:44.147 23:11:07 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:44.147 23:11:07 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.147 23:11:07 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69800 00:05:44.147 23:11:07 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.147 killing process with pid 69800 00:05:44.147 23:11:07 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.147 23:11:07 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69800' 00:05:44.147 23:11:07 alias_rpc -- common/autotest_common.sh@973 -- # kill 69800 00:05:44.147 23:11:07 alias_rpc -- common/autotest_common.sh@978 -- # wait 69800 00:05:44.407 00:05:44.407 real 0m1.628s 00:05:44.407 user 0m1.726s 00:05:44.407 sys 0m0.416s 00:05:44.407 23:11:08 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.407 23:11:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.407 ************************************ 00:05:44.407 END TEST alias_rpc 00:05:44.407 ************************************ 00:05:44.407 23:11:08 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:44.407 23:11:08 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:44.407 23:11:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.407 23:11:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.407 23:11:08 -- common/autotest_common.sh@10 -- # set +x 00:05:44.668 ************************************ 00:05:44.668 START TEST spdkcli_tcp 00:05:44.668 ************************************ 00:05:44.668 23:11:08 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:44.668 * Looking for test storage... 
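The spdkcli_tcp run that follows opens with the same 'lt 1.15 2' gate traced before every test above: cmp_versions splits each version string on '.', '-' and ':', then compares componentwise. A standalone sketch of that logic, simplified by defaulting missing components to 0 rather than validating digits through the decimal helper:

    lt() {                             # lt 1.15 2 -> returns 0 when $1 < $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                       # equal versions are not less-than
    }
    lt 1.15 2 && echo '1.15 sorts before 2'    # matches the traced result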
00:05:44.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:44.668 23:11:08 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.668 23:11:08 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.668 23:11:08 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.668 23:11:08 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.668 23:11:08 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:44.668 23:11:08 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.668 23:11:08 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.668 --rc genhtml_branch_coverage=1 00:05:44.668 --rc genhtml_function_coverage=1 00:05:44.668 --rc genhtml_legend=1 00:05:44.668 --rc geninfo_all_blocks=1 00:05:44.668 --rc geninfo_unexecuted_blocks=1 00:05:44.668 00:05:44.668 ' 00:05:44.668 23:11:08 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.668 --rc genhtml_branch_coverage=1 00:05:44.668 --rc genhtml_function_coverage=1 00:05:44.668 --rc genhtml_legend=1 00:05:44.668 --rc geninfo_all_blocks=1 00:05:44.668 --rc geninfo_unexecuted_blocks=1 00:05:44.668 
00:05:44.668 ' 00:05:44.668 23:11:08 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.668 --rc genhtml_branch_coverage=1 00:05:44.668 --rc genhtml_function_coverage=1 00:05:44.668 --rc genhtml_legend=1 00:05:44.668 --rc geninfo_all_blocks=1 00:05:44.668 --rc geninfo_unexecuted_blocks=1 00:05:44.668 00:05:44.668 ' 00:05:44.668 23:11:08 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.668 --rc genhtml_branch_coverage=1 00:05:44.668 --rc genhtml_function_coverage=1 00:05:44.668 --rc genhtml_legend=1 00:05:44.668 --rc geninfo_all_blocks=1 00:05:44.668 --rc geninfo_unexecuted_blocks=1 00:05:44.668 00:05:44.668 ' 00:05:44.668 23:11:08 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:44.668 23:11:08 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:44.668 23:11:08 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:44.668 23:11:08 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:44.668 23:11:08 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:44.668 23:11:08 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:44.668 23:11:08 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:44.668 23:11:08 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:44.668 23:11:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.668 23:11:08 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69879 00:05:44.668 23:11:08 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69879 00:05:44.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.668 23:11:08 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 69879 ']' 00:05:44.668 23:11:08 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.668 23:11:08 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.668 23:11:08 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.668 23:11:08 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.668 23:11:08 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:44.668 23:11:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.668 [2024-11-17 23:11:08.473509] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:05:44.669 [2024-11-17 23:11:08.473654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69879 ] 00:05:44.928 [2024-11-17 23:11:08.619202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.928 [2024-11-17 23:11:08.649812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.928 [2024-11-17 23:11:08.649874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.866 23:11:09 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.866 23:11:09 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:45.866 23:11:09 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69896 00:05:45.866 23:11:09 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:45.866 23:11:09 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:45.866 [ 00:05:45.866 "bdev_malloc_delete", 00:05:45.866 "bdev_malloc_create", 00:05:45.866 "bdev_null_resize", 00:05:45.866 "bdev_null_delete", 00:05:45.866 "bdev_null_create", 00:05:45.866 "bdev_nvme_cuse_unregister", 00:05:45.866 "bdev_nvme_cuse_register", 00:05:45.866 "bdev_opal_new_user", 00:05:45.866 "bdev_opal_set_lock_state", 00:05:45.866 "bdev_opal_delete", 00:05:45.866 "bdev_opal_get_info", 00:05:45.866 "bdev_opal_create", 00:05:45.866 "bdev_nvme_opal_revert", 00:05:45.866 "bdev_nvme_opal_init", 00:05:45.866 "bdev_nvme_send_cmd", 00:05:45.866 "bdev_nvme_set_keys", 00:05:45.866 "bdev_nvme_get_path_iostat", 00:05:45.866 "bdev_nvme_get_mdns_discovery_info", 00:05:45.866 "bdev_nvme_stop_mdns_discovery", 00:05:45.866 "bdev_nvme_start_mdns_discovery", 00:05:45.866 "bdev_nvme_set_multipath_policy", 00:05:45.866 "bdev_nvme_set_preferred_path", 00:05:45.866 "bdev_nvme_get_io_paths", 00:05:45.866 "bdev_nvme_remove_error_injection", 00:05:45.866 "bdev_nvme_add_error_injection", 00:05:45.866 "bdev_nvme_get_discovery_info", 00:05:45.866 "bdev_nvme_stop_discovery", 00:05:45.866 "bdev_nvme_start_discovery", 00:05:45.866 "bdev_nvme_get_controller_health_info", 00:05:45.866 "bdev_nvme_disable_controller", 00:05:45.866 "bdev_nvme_enable_controller", 00:05:45.866 "bdev_nvme_reset_controller", 00:05:45.866 "bdev_nvme_get_transport_statistics", 00:05:45.866 "bdev_nvme_apply_firmware", 00:05:45.866 "bdev_nvme_detach_controller", 00:05:45.866 "bdev_nvme_get_controllers", 00:05:45.866 "bdev_nvme_attach_controller", 00:05:45.866 "bdev_nvme_set_hotplug", 00:05:45.866 "bdev_nvme_set_options", 00:05:45.866 "bdev_passthru_delete", 00:05:45.866 "bdev_passthru_create", 00:05:45.866 "bdev_lvol_set_parent_bdev", 00:05:45.866 "bdev_lvol_set_parent", 00:05:45.866 "bdev_lvol_check_shallow_copy", 00:05:45.866 "bdev_lvol_start_shallow_copy", 00:05:45.866 "bdev_lvol_grow_lvstore", 00:05:45.866 "bdev_lvol_get_lvols", 00:05:45.866 "bdev_lvol_get_lvstores", 00:05:45.866 "bdev_lvol_delete", 00:05:45.866 "bdev_lvol_set_read_only", 00:05:45.866 "bdev_lvol_resize", 00:05:45.866 "bdev_lvol_decouple_parent", 00:05:45.866 "bdev_lvol_inflate", 00:05:45.866 "bdev_lvol_rename", 00:05:45.866 "bdev_lvol_clone_bdev", 00:05:45.866 "bdev_lvol_clone", 00:05:45.866 "bdev_lvol_snapshot", 00:05:45.866 "bdev_lvol_create", 00:05:45.866 "bdev_lvol_delete_lvstore", 00:05:45.866 "bdev_lvol_rename_lvstore", 00:05:45.866 
"bdev_lvol_create_lvstore", 00:05:45.866 "bdev_raid_set_options", 00:05:45.866 "bdev_raid_remove_base_bdev", 00:05:45.866 "bdev_raid_add_base_bdev", 00:05:45.866 "bdev_raid_delete", 00:05:45.866 "bdev_raid_create", 00:05:45.866 "bdev_raid_get_bdevs", 00:05:45.866 "bdev_error_inject_error", 00:05:45.866 "bdev_error_delete", 00:05:45.866 "bdev_error_create", 00:05:45.866 "bdev_split_delete", 00:05:45.866 "bdev_split_create", 00:05:45.866 "bdev_delay_delete", 00:05:45.866 "bdev_delay_create", 00:05:45.866 "bdev_delay_update_latency", 00:05:45.866 "bdev_zone_block_delete", 00:05:45.866 "bdev_zone_block_create", 00:05:45.866 "blobfs_create", 00:05:45.866 "blobfs_detect", 00:05:45.866 "blobfs_set_cache_size", 00:05:45.866 "bdev_xnvme_delete", 00:05:45.866 "bdev_xnvme_create", 00:05:45.866 "bdev_aio_delete", 00:05:45.866 "bdev_aio_rescan", 00:05:45.866 "bdev_aio_create", 00:05:45.866 "bdev_ftl_set_property", 00:05:45.866 "bdev_ftl_get_properties", 00:05:45.866 "bdev_ftl_get_stats", 00:05:45.866 "bdev_ftl_unmap", 00:05:45.866 "bdev_ftl_unload", 00:05:45.866 "bdev_ftl_delete", 00:05:45.866 "bdev_ftl_load", 00:05:45.866 "bdev_ftl_create", 00:05:45.866 "bdev_virtio_attach_controller", 00:05:45.866 "bdev_virtio_scsi_get_devices", 00:05:45.866 "bdev_virtio_detach_controller", 00:05:45.866 "bdev_virtio_blk_set_hotplug", 00:05:45.866 "bdev_iscsi_delete", 00:05:45.866 "bdev_iscsi_create", 00:05:45.866 "bdev_iscsi_set_options", 00:05:45.866 "accel_error_inject_error", 00:05:45.866 "ioat_scan_accel_module", 00:05:45.866 "dsa_scan_accel_module", 00:05:45.866 "iaa_scan_accel_module", 00:05:45.866 "keyring_file_remove_key", 00:05:45.866 "keyring_file_add_key", 00:05:45.866 "keyring_linux_set_options", 00:05:45.866 "fsdev_aio_delete", 00:05:45.866 "fsdev_aio_create", 00:05:45.866 "iscsi_get_histogram", 00:05:45.866 "iscsi_enable_histogram", 00:05:45.866 "iscsi_set_options", 00:05:45.866 "iscsi_get_auth_groups", 00:05:45.866 "iscsi_auth_group_remove_secret", 00:05:45.866 "iscsi_auth_group_add_secret", 00:05:45.866 "iscsi_delete_auth_group", 00:05:45.866 "iscsi_create_auth_group", 00:05:45.866 "iscsi_set_discovery_auth", 00:05:45.866 "iscsi_get_options", 00:05:45.866 "iscsi_target_node_request_logout", 00:05:45.866 "iscsi_target_node_set_redirect", 00:05:45.866 "iscsi_target_node_set_auth", 00:05:45.866 "iscsi_target_node_add_lun", 00:05:45.866 "iscsi_get_stats", 00:05:45.866 "iscsi_get_connections", 00:05:45.866 "iscsi_portal_group_set_auth", 00:05:45.866 "iscsi_start_portal_group", 00:05:45.866 "iscsi_delete_portal_group", 00:05:45.866 "iscsi_create_portal_group", 00:05:45.866 "iscsi_get_portal_groups", 00:05:45.866 "iscsi_delete_target_node", 00:05:45.866 "iscsi_target_node_remove_pg_ig_maps", 00:05:45.866 "iscsi_target_node_add_pg_ig_maps", 00:05:45.866 "iscsi_create_target_node", 00:05:45.866 "iscsi_get_target_nodes", 00:05:45.866 "iscsi_delete_initiator_group", 00:05:45.866 "iscsi_initiator_group_remove_initiators", 00:05:45.866 "iscsi_initiator_group_add_initiators", 00:05:45.866 "iscsi_create_initiator_group", 00:05:45.866 "iscsi_get_initiator_groups", 00:05:45.866 "nvmf_set_crdt", 00:05:45.866 "nvmf_set_config", 00:05:45.866 "nvmf_set_max_subsystems", 00:05:45.866 "nvmf_stop_mdns_prr", 00:05:45.866 "nvmf_publish_mdns_prr", 00:05:45.866 "nvmf_subsystem_get_listeners", 00:05:45.866 "nvmf_subsystem_get_qpairs", 00:05:45.866 "nvmf_subsystem_get_controllers", 00:05:45.866 "nvmf_get_stats", 00:05:45.866 "nvmf_get_transports", 00:05:45.866 "nvmf_create_transport", 00:05:45.866 "nvmf_get_targets", 00:05:45.866 
"nvmf_delete_target", 00:05:45.866 "nvmf_create_target", 00:05:45.866 "nvmf_subsystem_allow_any_host", 00:05:45.866 "nvmf_subsystem_set_keys", 00:05:45.866 "nvmf_subsystem_remove_host", 00:05:45.866 "nvmf_subsystem_add_host", 00:05:45.866 "nvmf_ns_remove_host", 00:05:45.866 "nvmf_ns_add_host", 00:05:45.866 "nvmf_subsystem_remove_ns", 00:05:45.866 "nvmf_subsystem_set_ns_ana_group", 00:05:45.866 "nvmf_subsystem_add_ns", 00:05:45.866 "nvmf_subsystem_listener_set_ana_state", 00:05:45.866 "nvmf_discovery_get_referrals", 00:05:45.866 "nvmf_discovery_remove_referral", 00:05:45.866 "nvmf_discovery_add_referral", 00:05:45.866 "nvmf_subsystem_remove_listener", 00:05:45.866 "nvmf_subsystem_add_listener", 00:05:45.866 "nvmf_delete_subsystem", 00:05:45.866 "nvmf_create_subsystem", 00:05:45.866 "nvmf_get_subsystems", 00:05:45.866 "env_dpdk_get_mem_stats", 00:05:45.867 "nbd_get_disks", 00:05:45.867 "nbd_stop_disk", 00:05:45.867 "nbd_start_disk", 00:05:45.867 "ublk_recover_disk", 00:05:45.867 "ublk_get_disks", 00:05:45.867 "ublk_stop_disk", 00:05:45.867 "ublk_start_disk", 00:05:45.867 "ublk_destroy_target", 00:05:45.867 "ublk_create_target", 00:05:45.867 "virtio_blk_create_transport", 00:05:45.867 "virtio_blk_get_transports", 00:05:45.867 "vhost_controller_set_coalescing", 00:05:45.867 "vhost_get_controllers", 00:05:45.867 "vhost_delete_controller", 00:05:45.867 "vhost_create_blk_controller", 00:05:45.867 "vhost_scsi_controller_remove_target", 00:05:45.867 "vhost_scsi_controller_add_target", 00:05:45.867 "vhost_start_scsi_controller", 00:05:45.867 "vhost_create_scsi_controller", 00:05:45.867 "thread_set_cpumask", 00:05:45.867 "scheduler_set_options", 00:05:45.867 "framework_get_governor", 00:05:45.867 "framework_get_scheduler", 00:05:45.867 "framework_set_scheduler", 00:05:45.867 "framework_get_reactors", 00:05:45.867 "thread_get_io_channels", 00:05:45.867 "thread_get_pollers", 00:05:45.867 "thread_get_stats", 00:05:45.867 "framework_monitor_context_switch", 00:05:45.867 "spdk_kill_instance", 00:05:45.867 "log_enable_timestamps", 00:05:45.867 "log_get_flags", 00:05:45.867 "log_clear_flag", 00:05:45.867 "log_set_flag", 00:05:45.867 "log_get_level", 00:05:45.867 "log_set_level", 00:05:45.867 "log_get_print_level", 00:05:45.867 "log_set_print_level", 00:05:45.867 "framework_enable_cpumask_locks", 00:05:45.867 "framework_disable_cpumask_locks", 00:05:45.867 "framework_wait_init", 00:05:45.867 "framework_start_init", 00:05:45.867 "scsi_get_devices", 00:05:45.867 "bdev_get_histogram", 00:05:45.867 "bdev_enable_histogram", 00:05:45.867 "bdev_set_qos_limit", 00:05:45.867 "bdev_set_qd_sampling_period", 00:05:45.867 "bdev_get_bdevs", 00:05:45.867 "bdev_reset_iostat", 00:05:45.867 "bdev_get_iostat", 00:05:45.867 "bdev_examine", 00:05:45.867 "bdev_wait_for_examine", 00:05:45.867 "bdev_set_options", 00:05:45.867 "accel_get_stats", 00:05:45.867 "accel_set_options", 00:05:45.867 "accel_set_driver", 00:05:45.867 "accel_crypto_key_destroy", 00:05:45.867 "accel_crypto_keys_get", 00:05:45.867 "accel_crypto_key_create", 00:05:45.867 "accel_assign_opc", 00:05:45.867 "accel_get_module_info", 00:05:45.867 "accel_get_opc_assignments", 00:05:45.867 "vmd_rescan", 00:05:45.867 "vmd_remove_device", 00:05:45.867 "vmd_enable", 00:05:45.867 "sock_get_default_impl", 00:05:45.867 "sock_set_default_impl", 00:05:45.867 "sock_impl_set_options", 00:05:45.867 "sock_impl_get_options", 00:05:45.867 "iobuf_get_stats", 00:05:45.867 "iobuf_set_options", 00:05:45.867 "keyring_get_keys", 00:05:45.867 "framework_get_pci_devices", 00:05:45.867 
"framework_get_config", 00:05:45.867 "framework_get_subsystems", 00:05:45.867 "fsdev_set_opts", 00:05:45.867 "fsdev_get_opts", 00:05:45.867 "trace_get_info", 00:05:45.867 "trace_get_tpoint_group_mask", 00:05:45.867 "trace_disable_tpoint_group", 00:05:45.867 "trace_enable_tpoint_group", 00:05:45.867 "trace_clear_tpoint_mask", 00:05:45.867 "trace_set_tpoint_mask", 00:05:45.867 "notify_get_notifications", 00:05:45.867 "notify_get_types", 00:05:45.867 "spdk_get_version", 00:05:45.867 "rpc_get_methods" 00:05:45.867 ] 00:05:45.867 23:11:09 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:45.867 23:11:09 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:45.867 23:11:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.867 23:11:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:45.867 23:11:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69879 00:05:45.867 23:11:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 69879 ']' 00:05:45.867 23:11:09 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 69879 00:05:45.867 23:11:09 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:45.867 23:11:09 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.867 23:11:09 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69879 00:05:45.867 23:11:09 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.867 23:11:09 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.867 23:11:09 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69879' 00:05:45.867 killing process with pid 69879 00:05:45.867 23:11:09 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 69879 00:05:45.867 23:11:09 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 69879 00:05:46.128 00:05:46.128 real 0m1.704s 00:05:46.128 user 0m2.990s 00:05:46.128 sys 0m0.487s 00:05:46.128 23:11:09 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.128 ************************************ 00:05:46.128 END TEST spdkcli_tcp 00:05:46.128 ************************************ 00:05:46.128 23:11:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.389 23:11:09 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.389 23:11:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.389 23:11:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.389 23:11:09 -- common/autotest_common.sh@10 -- # set +x 00:05:46.389 ************************************ 00:05:46.389 START TEST dpdk_mem_utility 00:05:46.389 ************************************ 00:05:46.389 23:11:10 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.389 * Looking for test storage... 
00:05:46.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:46.389 23:11:10 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.389 23:11:10 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.389 23:11:10 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.389 23:11:10 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.389 23:11:10 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:46.389 23:11:10 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.389 23:11:10 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.389 --rc genhtml_branch_coverage=1 00:05:46.389 --rc genhtml_function_coverage=1 00:05:46.389 --rc genhtml_legend=1 00:05:46.389 --rc geninfo_all_blocks=1 00:05:46.389 --rc geninfo_unexecuted_blocks=1 00:05:46.389 00:05:46.389 ' 00:05:46.390 23:11:10 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.390 --rc 
genhtml_branch_coverage=1 00:05:46.390 --rc genhtml_function_coverage=1 00:05:46.390 --rc genhtml_legend=1 00:05:46.390 --rc geninfo_all_blocks=1 00:05:46.390 --rc geninfo_unexecuted_blocks=1 00:05:46.390 00:05:46.390 ' 00:05:46.390 23:11:10 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.390 --rc genhtml_branch_coverage=1 00:05:46.390 --rc genhtml_function_coverage=1 00:05:46.390 --rc genhtml_legend=1 00:05:46.390 --rc geninfo_all_blocks=1 00:05:46.390 --rc geninfo_unexecuted_blocks=1 00:05:46.390 00:05:46.390 ' 00:05:46.390 23:11:10 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.390 --rc genhtml_branch_coverage=1 00:05:46.390 --rc genhtml_function_coverage=1 00:05:46.390 --rc genhtml_legend=1 00:05:46.390 --rc geninfo_all_blocks=1 00:05:46.390 --rc geninfo_unexecuted_blocks=1 00:05:46.390 00:05:46.390 ' 00:05:46.390 23:11:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:46.390 23:11:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69979 00:05:46.390 23:11:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69979 00:05:46.390 23:11:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.390 23:11:10 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 69979 ']' 00:05:46.390 23:11:10 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.390 23:11:10 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.390 23:11:10 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.390 23:11:10 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.390 23:11:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.648 [2024-11-17 23:11:10.223354] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:05:46.649 [2024-11-17 23:11:10.223477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69979 ] 00:05:46.649 [2024-11-17 23:11:10.369676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.649 [2024-11-17 23:11:10.398601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.586 23:11:11 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.586 23:11:11 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:47.586 23:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:47.586 23:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:47.586 23:11:11 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.586 23:11:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:47.586 { 00:05:47.586 "filename": "/tmp/spdk_mem_dump.txt" 00:05:47.586 } 00:05:47.586 23:11:11 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.586 23:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:47.586 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:47.586 1 heaps totaling size 810.000000 MiB 00:05:47.586 size: 810.000000 MiB heap id: 0 00:05:47.586 end heaps---------- 00:05:47.586 9 mempools totaling size 595.772034 MiB 00:05:47.586 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:47.586 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:47.586 size: 92.545471 MiB name: bdev_io_69979 00:05:47.586 size: 50.003479 MiB name: msgpool_69979 00:05:47.586 size: 36.509338 MiB name: fsdev_io_69979 00:05:47.586 size: 21.763794 MiB name: PDU_Pool 00:05:47.586 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:47.586 size: 4.133484 MiB name: evtpool_69979 00:05:47.586 size: 0.026123 MiB name: Session_Pool 00:05:47.586 end mempools------- 00:05:47.586 6 memzones totaling size 4.142822 MiB 00:05:47.586 size: 1.000366 MiB name: RG_ring_0_69979 00:05:47.586 size: 1.000366 MiB name: RG_ring_1_69979 00:05:47.586 size: 1.000366 MiB name: RG_ring_4_69979 00:05:47.586 size: 1.000366 MiB name: RG_ring_5_69979 00:05:47.586 size: 0.125366 MiB name: RG_ring_2_69979 00:05:47.586 size: 0.015991 MiB name: RG_ring_3_69979 00:05:47.586 end memzones------- 00:05:47.586 23:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:47.586 heap id: 0 total size: 810.000000 MiB number of busy elements: 315 number of free elements: 15 00:05:47.586 list of free elements. 
size: 10.812866 MiB
00:05:47.586 element at address: 0x200018a00000 with size: 0.999878 MiB
00:05:47.586 element at address: 0x200018c00000 with size: 0.999878 MiB
00:05:47.586 element at address: 0x200031800000 with size: 0.994446 MiB
00:05:47.586 element at address: 0x200000400000 with size: 0.993958 MiB
00:05:47.586 element at address: 0x200006400000 with size: 0.959839 MiB
00:05:47.586 element at address: 0x200012c00000 with size: 0.954285 MiB
00:05:47.586 element at address: 0x200018e00000 with size: 0.936584 MiB
00:05:47.586 element at address: 0x200000200000 with size: 0.717346 MiB
00:05:47.586 element at address: 0x20001a600000 with size: 0.567322 MiB
00:05:47.586 element at address: 0x20000a600000 with size: 0.488892 MiB
00:05:47.586 element at address: 0x200000c00000 with size: 0.487000 MiB
00:05:47.586 element at address: 0x200019000000 with size: 0.485657 MiB
00:05:47.586 element at address: 0x200003e00000 with size: 0.480286 MiB
00:05:47.586 element at address: 0x200027a00000 with size: 0.395752 MiB
00:05:47.586 element at address: 0x200000800000 with size: 0.351746 MiB
00:05:47.586 list of standard malloc elements. size: 199.268250 MiB
00:05:47.586 element at address: 0x20000a7fff80 with size: 132.000122 MiB
00:05:47.586 element at address: 0x2000065fff80 with size: 64.000122 MiB
00:05:47.586 element at address: 0x200018afff80 with size: 1.000122 MiB
00:05:47.586 element at address: 0x200018cfff80 with size: 1.000122 MiB
00:05:47.586 element at address: 0x200018efff80 with size: 1.000122 MiB
00:05:47.586 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:05:47.586 element at address: 0x200018eeff00 with size: 0.062622 MiB
00:05:47.586 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:05:47.586 element at address: 0x200018eefdc0 with size: 0.000305 MiB
[remaining standard malloc elements, each with size: 0.000183 MiB, spanning addresses 0x2000002d7c40 through 0x200027a6ff00, truncated]
00:05:47.589 list of memzone associated elements.
size: 599.918884 MiB 00:05:47.589 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:47.589 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:47.589 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:47.589 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:47.589 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:47.589 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_69979_0 00:05:47.589 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:47.589 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69979_0 00:05:47.589 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:47.589 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_69979_0 00:05:47.589 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:47.589 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:47.589 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:47.589 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:47.589 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:47.589 associated memzone info: size: 3.000122 MiB name: MP_evtpool_69979_0 00:05:47.589 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:47.589 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69979 00:05:47.589 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:47.589 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69979 00:05:47.589 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:47.589 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:47.589 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:47.589 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:47.589 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:47.589 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:47.589 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:47.589 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:47.589 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:47.589 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69979 00:05:47.589 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:47.589 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69979 00:05:47.589 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:47.589 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69979 00:05:47.589 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:05:47.589 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69979 00:05:47.589 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:47.589 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_69979 00:05:47.589 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:47.589 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69979 00:05:47.589 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:47.589 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:47.589 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:47.589 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:47.589 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:47.589 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:47.589 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:47.589 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_69979 00:05:47.589 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:47.589 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69979 00:05:47.589 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:47.589 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:47.589 element at address: 0x200027a65680 with size: 0.023743 MiB 00:05:47.589 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:47.589 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:47.589 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69979 00:05:47.589 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:05:47.589 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:47.589 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:47.589 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69979 00:05:47.589 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:47.589 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_69979 00:05:47.589 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:47.589 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69979 00:05:47.589 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:05:47.589 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:47.589 23:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:47.589 23:11:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69979 00:05:47.589 23:11:11 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 69979 ']' 00:05:47.589 23:11:11 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 69979 00:05:47.589 23:11:11 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:47.589 23:11:11 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.589 23:11:11 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69979 00:05:47.589 23:11:11 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.589 23:11:11 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.589 killing process with pid 69979 00:05:47.589 23:11:11 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69979' 00:05:47.589 23:11:11 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 69979 00:05:47.589 23:11:11 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 69979 00:05:47.847 00:05:47.847 real 0m1.479s 00:05:47.847 user 0m1.485s 00:05:47.847 sys 0m0.411s 00:05:47.847 ************************************ 00:05:47.847 END TEST dpdk_mem_utility 00:05:47.847 ************************************ 00:05:47.847 23:11:11 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.847 23:11:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:47.847 23:11:11 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:47.847 23:11:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.847 23:11:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.847 23:11:11 -- common/autotest_common.sh@10 -- # set +x 
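The dpdk_mem_utility test that just finished drives SPDK's DPDK-memory introspection path end to end: the env_dpdk_get_mem_stats RPC asks the running spdk_tgt to dump its DPDK heap state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py post-processes that dump into the heap/mempool/memzone summary and the per-element listing shown above. A minimal hand-run sketch of the same sequence, assuming the repository layout and default RPC socket (/var/tmp/spdk.sock) seen in the trace:

  ./build/bin/spdk_tgt &                    # start the SPDK target application
  ./scripts/rpc.py env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py                # heap/mempool/memzone totals
  ./scripts/dpdk_mem_info.py -m 0           # per-element detail for heap 0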
00:05:47.847 ************************************ 00:05:47.847 START TEST event 00:05:47.847 ************************************ 00:05:47.847 23:11:11 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:47.847 * Looking for test storage... 00:05:47.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:47.847 23:11:11 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.847 23:11:11 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.847 23:11:11 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:48.112 23:11:11 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:48.112 23:11:11 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.112 23:11:11 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.112 23:11:11 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.112 23:11:11 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.112 23:11:11 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.112 23:11:11 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.112 23:11:11 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.112 23:11:11 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.112 23:11:11 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.112 23:11:11 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.112 23:11:11 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.112 23:11:11 event -- scripts/common.sh@344 -- # case "$op" in 00:05:48.112 23:11:11 event -- scripts/common.sh@345 -- # : 1 00:05:48.112 23:11:11 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.112 23:11:11 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.112 23:11:11 event -- scripts/common.sh@365 -- # decimal 1 00:05:48.112 23:11:11 event -- scripts/common.sh@353 -- # local d=1 00:05:48.112 23:11:11 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.112 23:11:11 event -- scripts/common.sh@355 -- # echo 1 00:05:48.112 23:11:11 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.112 23:11:11 event -- scripts/common.sh@366 -- # decimal 2 00:05:48.112 23:11:11 event -- scripts/common.sh@353 -- # local d=2 00:05:48.112 23:11:11 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.112 23:11:11 event -- scripts/common.sh@355 -- # echo 2 00:05:48.112 23:11:11 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.112 23:11:11 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.112 23:11:11 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.112 23:11:11 event -- scripts/common.sh@368 -- # return 0 00:05:48.112 23:11:11 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.112 23:11:11 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:48.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.112 --rc genhtml_branch_coverage=1 00:05:48.112 --rc genhtml_function_coverage=1 00:05:48.112 --rc genhtml_legend=1 00:05:48.112 --rc geninfo_all_blocks=1 00:05:48.112 --rc geninfo_unexecuted_blocks=1 00:05:48.112 00:05:48.112 ' 00:05:48.112 23:11:11 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:48.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.112 --rc genhtml_branch_coverage=1 00:05:48.112 --rc genhtml_function_coverage=1 00:05:48.112 --rc genhtml_legend=1 00:05:48.112 --rc 
geninfo_all_blocks=1 00:05:48.112 --rc geninfo_unexecuted_blocks=1 00:05:48.112 00:05:48.112 ' 00:05:48.112 23:11:11 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:48.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.112 --rc genhtml_branch_coverage=1 00:05:48.112 --rc genhtml_function_coverage=1 00:05:48.112 --rc genhtml_legend=1 00:05:48.112 --rc geninfo_all_blocks=1 00:05:48.112 --rc geninfo_unexecuted_blocks=1 00:05:48.112 00:05:48.112 ' 00:05:48.112 23:11:11 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:48.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.112 --rc genhtml_branch_coverage=1 00:05:48.112 --rc genhtml_function_coverage=1 00:05:48.112 --rc genhtml_legend=1 00:05:48.113 --rc geninfo_all_blocks=1 00:05:48.113 --rc geninfo_unexecuted_blocks=1 00:05:48.113 00:05:48.113 ' 00:05:48.113 23:11:11 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:48.113 23:11:11 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:48.113 23:11:11 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.113 23:11:11 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:48.113 23:11:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.113 23:11:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.113 ************************************ 00:05:48.113 START TEST event_perf 00:05:48.113 ************************************ 00:05:48.113 23:11:11 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.113 Running I/O for 1 seconds...[2024-11-17 23:11:11.725958] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:48.113 [2024-11-17 23:11:11.726068] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70060 ] 00:05:48.113 [2024-11-17 23:11:11.864782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.113 [2024-11-17 23:11:11.886490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.113 [2024-11-17 23:11:11.886741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.113 Running I/O for 1 seconds...[2024-11-17 23:11:11.887069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:48.113 [2024-11-17 23:11:11.887145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.526 00:05:49.526 lcore 0: 189871 00:05:49.526 lcore 1: 189873 00:05:49.526 lcore 2: 189873 00:05:49.526 lcore 3: 189869 00:05:49.526 done. 
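The per-lcore counters printed above are the events each reactor executed during the one-second measurement window, i.e. roughly 190k events per core (about 760k/s aggregate across the 0xF mask). The invocation is the one shown in the trace; the reactor and reactor_perf tests that follow use the same pattern but on a single core (their EAL trace shows -c 0x1). A sketch, assuming the same tree layout:

  ./test/event/event_perf/event_perf -m 0xF -t 1   # 4 reactors, 1-second run
  ./test/event/reactor/reactor -t 1                # single-reactor poller/tick exercise
  ./test/event/reactor_perf/reactor_perf -t 1      # reports events per second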
00:05:49.526 00:05:49.526 real 0m1.245s 00:05:49.526 user 0m4.054s 00:05:49.526 sys 0m0.073s 00:05:49.526 23:11:12 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.526 23:11:12 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:49.526 ************************************ 00:05:49.526 END TEST event_perf 00:05:49.526 ************************************ 00:05:49.526 23:11:13 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:49.526 23:11:13 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:49.526 23:11:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.526 23:11:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.526 ************************************ 00:05:49.526 START TEST event_reactor 00:05:49.526 ************************************ 00:05:49.526 23:11:13 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:49.526 [2024-11-17 23:11:13.043578] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:49.526 [2024-11-17 23:11:13.043699] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70094 ] 00:05:49.526 [2024-11-17 23:11:13.189423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.526 [2024-11-17 23:11:13.209956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.468 test_start 00:05:50.468 oneshot 00:05:50.468 tick 100 00:05:50.468 tick 100 00:05:50.468 tick 250 00:05:50.468 tick 100 00:05:50.468 tick 100 00:05:50.468 tick 250 00:05:50.468 tick 100 00:05:50.468 tick 500 00:05:50.468 tick 100 00:05:50.468 tick 100 00:05:50.468 tick 250 00:05:50.468 tick 100 00:05:50.468 tick 100 00:05:50.468 test_end 00:05:50.468 00:05:50.468 real 0m1.237s 00:05:50.468 user 0m1.072s 00:05:50.468 sys 0m0.056s 00:05:50.468 23:11:14 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.468 23:11:14 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:50.468 ************************************ 00:05:50.468 END TEST event_reactor 00:05:50.468 ************************************ 00:05:50.730 23:11:14 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:50.730 23:11:14 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:50.730 23:11:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.730 23:11:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.730 ************************************ 00:05:50.730 START TEST event_reactor_perf 00:05:50.730 ************************************ 00:05:50.730 23:11:14 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:50.730 [2024-11-17 23:11:14.342754] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:05:50.730 [2024-11-17 23:11:14.342863] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70130 ] 00:05:50.730 [2024-11-17 23:11:14.489321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.730 [2024-11-17 23:11:14.510949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.121 test_start 00:05:52.121 test_end 00:05:52.121 Performance: 309076 events per second 00:05:52.121 00:05:52.121 real 0m1.247s 00:05:52.121 user 0m1.083s 00:05:52.121 sys 0m0.053s 00:05:52.121 ************************************ 00:05:52.121 END TEST event_reactor_perf 00:05:52.121 ************************************ 00:05:52.121 23:11:15 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.121 23:11:15 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:52.121 23:11:15 event -- event/event.sh@49 -- # uname -s 00:05:52.121 23:11:15 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:52.121 23:11:15 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:52.121 23:11:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.121 23:11:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.121 23:11:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.121 ************************************ 00:05:52.121 START TEST event_scheduler 00:05:52.121 ************************************ 00:05:52.121 23:11:15 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:52.121 * Looking for test storage... 
00:05:52.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:52.121 23:11:15 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:52.121 23:11:15 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:52.121 23:11:15 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:52.121 23:11:15 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:52.121 23:11:15 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.121 23:11:15 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.121 23:11:15 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.121 23:11:15 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.121 23:11:15 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.121 23:11:15 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.121 23:11:15 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.121 23:11:15 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.121 23:11:15 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.121 23:11:15 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.121 23:11:15 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.121 23:11:15 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:52.121 23:11:15 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:52.121 23:11:15 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.121 23:11:15 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.121 23:11:15 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:52.121 23:11:15 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:52.121 23:11:15 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.122 23:11:15 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:52.122 23:11:15 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.122 23:11:15 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:52.122 23:11:15 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:52.122 23:11:15 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.122 23:11:15 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:52.122 23:11:15 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.122 23:11:15 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.122 23:11:15 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.122 23:11:15 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:52.122 23:11:15 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.122 23:11:15 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:52.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.122 --rc genhtml_branch_coverage=1 00:05:52.122 --rc genhtml_function_coverage=1 00:05:52.122 --rc genhtml_legend=1 00:05:52.122 --rc geninfo_all_blocks=1 00:05:52.122 --rc geninfo_unexecuted_blocks=1 00:05:52.122 00:05:52.122 ' 00:05:52.122 23:11:15 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:52.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.122 --rc genhtml_branch_coverage=1 00:05:52.122 --rc genhtml_function_coverage=1 00:05:52.122 --rc genhtml_legend=1 00:05:52.122 --rc geninfo_all_blocks=1 00:05:52.122 --rc geninfo_unexecuted_blocks=1 00:05:52.122 00:05:52.122 ' 00:05:52.122 23:11:15 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:52.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.122 --rc genhtml_branch_coverage=1 00:05:52.122 --rc genhtml_function_coverage=1 00:05:52.122 --rc genhtml_legend=1 00:05:52.122 --rc geninfo_all_blocks=1 00:05:52.122 --rc geninfo_unexecuted_blocks=1 00:05:52.122 00:05:52.122 ' 00:05:52.122 23:11:15 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:52.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.122 --rc genhtml_branch_coverage=1 00:05:52.122 --rc genhtml_function_coverage=1 00:05:52.122 --rc genhtml_legend=1 00:05:52.122 --rc geninfo_all_blocks=1 00:05:52.122 --rc geninfo_unexecuted_blocks=1 00:05:52.122 00:05:52.122 ' 00:05:52.122 23:11:15 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:52.122 23:11:15 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70201 00:05:52.122 23:11:15 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.122 23:11:15 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70201 00:05:52.122 23:11:15 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70201 ']' 00:05:52.122 23:11:15 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:52.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.122 23:11:15 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.122 23:11:15 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.122 23:11:15 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.122 23:11:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.122 23:11:15 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:52.122 [2024-11-17 23:11:15.848438] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:52.122 [2024-11-17 23:11:15.848589] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70201 ] 00:05:52.383 [2024-11-17 23:11:15.994836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.383 [2024-11-17 23:11:16.022450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.383 [2024-11-17 23:11:16.022760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.383 [2024-11-17 23:11:16.023257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.383 [2024-11-17 23:11:16.023336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.955 23:11:16 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.955 23:11:16 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:52.955 23:11:16 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:52.955 23:11:16 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.955 23:11:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.955 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:52.955 POWER: Cannot set governor of lcore 0 to userspace 00:05:52.955 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:52.955 POWER: Cannot set governor of lcore 0 to performance 00:05:52.955 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:52.955 POWER: Cannot set governor of lcore 0 to userspace 00:05:52.955 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:52.955 POWER: Unable to set Power Management Environment for lcore 0 00:05:52.955 [2024-11-17 23:11:16.716742] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:52.955 [2024-11-17 23:11:16.716780] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:52.956 [2024-11-17 23:11:16.716801] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:52.956 [2024-11-17 23:11:16.716833] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:52.956 [2024-11-17 23:11:16.716843] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:52.956 [2024-11-17 23:11:16.716853] scheduler_dynamic.c: 
431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:52.956 23:11:16 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.956 23:11:16 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:52.956 23:11:16 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.956 23:11:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.217 [2024-11-17 23:11:16.802747] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:53.217 23:11:16 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.217 23:11:16 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:53.217 23:11:16 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.217 23:11:16 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.217 23:11:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.217 ************************************ 00:05:53.217 START TEST scheduler_create_thread 00:05:53.217 ************************************ 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.217 2 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.217 3 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.217 4 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.217 5 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.217 6 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:53.217 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.218 7 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.218 8 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.218 9 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.218 10 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.218 23:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.802 23:11:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.802 00:05:53.802 real 0m0.592s 00:05:53.802 user 0m0.015s 00:05:53.802 sys 0m0.002s 00:05:53.802 ************************************ 00:05:53.802 END TEST scheduler_create_thread 00:05:53.802 ************************************ 00:05:53.802 23:11:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.802 23:11:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.802 23:11:17 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:53.802 23:11:17 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70201 00:05:53.803 23:11:17 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70201 ']' 00:05:53.803 23:11:17 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70201 00:05:53.803 23:11:17 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:53.803 23:11:17 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.803 23:11:17 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70201 00:05:53.803 23:11:17 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:53.803 23:11:17 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:53.803 killing process with pid 70201 00:05:53.803 23:11:17 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70201' 00:05:53.803 23:11:17 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70201 00:05:53.803 23:11:17 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70201 00:05:54.370 [2024-11-17 23:11:17.892337] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
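The trace above exercises the scheduler plugin RPCs end to end: each thread is created with a name (-n), an optional CPU mask (-m), and an active percentage (-a); the thread id printed by scheduler_thread_create is then reused to change activity or delete the thread. A minimal sketch of the same sequence (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py; socket and paths here are assumptions for illustration):

    # Create a pinned, fully active thread on core 0; rpc.py prints the new thread id.
    thread_id=$(scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m 0x1 -a 100)

    # Lower the thread to 50% activity, then delete it.
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$thread_id"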
00:05:54.370 00:05:54.370 real 0m2.373s 00:05:54.370 user 0m4.662s 00:05:54.370 sys 0m0.350s 00:05:54.370 ************************************ 00:05:54.370 END TEST event_scheduler 00:05:54.370 ************************************ 00:05:54.370 23:11:18 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.370 23:11:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.370 23:11:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:54.370 23:11:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:54.370 23:11:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.370 23:11:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.370 23:11:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.370 ************************************ 00:05:54.370 START TEST app_repeat 00:05:54.370 ************************************ 00:05:54.370 23:11:18 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:54.370 23:11:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.370 23:11:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.370 23:11:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:54.370 23:11:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.370 23:11:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:54.370 23:11:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:54.370 23:11:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:54.370 23:11:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70274 00:05:54.370 Process app_repeat pid: 70274 00:05:54.370 23:11:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.370 23:11:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70274' 00:05:54.370 spdk_app_start Round 0 00:05:54.370 23:11:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.370 23:11:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:54.370 23:11:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70274 /var/tmp/spdk-nbd.sock 00:05:54.370 23:11:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70274 ']' 00:05:54.370 23:11:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.370 23:11:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.371 23:11:18 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:54.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.371 23:11:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.371 23:11:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.371 23:11:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.371 [2024-11-17 23:11:18.108502] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
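From here the harness runs the same cycle once per round. Reconstructed from the event.sh trace lines (a sketch of the loop shape, not the verbatim test source):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # one malloc bdev per nbd device: 64 MB, 4096-byte blocks
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc0
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3
    done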
00:05:54.371 [2024-11-17 23:11:18.108617] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70274 ] 00:05:54.630 [2024-11-17 23:11:18.247137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.630 [2024-11-17 23:11:18.267046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.630 [2024-11-17 23:11:18.267244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.630 23:11:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.630 23:11:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:54.630 23:11:18 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.890 Malloc0 00:05:54.890 23:11:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.149 Malloc1 00:05:55.149 23:11:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.149 23:11:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.149 23:11:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.149 23:11:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.149 23:11:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.149 23:11:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.149 23:11:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.149 23:11:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.149 23:11:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.149 23:11:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.149 23:11:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.149 23:11:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.149 23:11:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.149 23:11:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.149 23:11:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.149 23:11:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.407 /dev/nbd0 00:05:55.407 23:11:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.407 23:11:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.407 23:11:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:55.407 23:11:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:55.407 23:11:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.407 23:11:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.407 23:11:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:55.407 23:11:19 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:55.407 23:11:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.407 23:11:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.407 23:11:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.408 1+0 records in 00:05:55.408 1+0 records out 00:05:55.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195414 s, 21.0 MB/s 00:05:55.408 23:11:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.408 23:11:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:55.408 23:11:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.408 23:11:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.408 23:11:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:55.408 23:11:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.408 23:11:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.408 23:11:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.408 /dev/nbd1 00:05:55.408 23:11:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.408 23:11:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.408 23:11:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:55.408 23:11:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:55.408 23:11:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.408 23:11:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.408 23:11:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:55.408 23:11:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:55.408 23:11:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.408 23:11:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.408 23:11:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.408 1+0 records in 00:05:55.408 1+0 records out 00:05:55.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299746 s, 13.7 MB/s 00:05:55.408 23:11:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.408 23:11:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:55.408 23:11:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.408 23:11:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.408 23:11:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:55.408 23:11:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.408 23:11:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.408 23:11:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.408 23:11:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
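waitfornbd, traced above for nbd0 and nbd1, polls /proc/partitions until the device node appears, then proves it is readable with a single direct-I/O read. Condensed from the trace (the temp-file path is simplified and the inter-poll sleep is an assumption, since only the loop bounds and checks are visible):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # back-off between polls (assumed; not visible in the trace)
        done
        # prove the device is usable: read one 4 KiB block with direct I/O
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }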
00:05:55.671 23:11:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.671 23:11:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.671 { 00:05:55.671 "nbd_device": "/dev/nbd0", 00:05:55.671 "bdev_name": "Malloc0" 00:05:55.671 }, 00:05:55.671 { 00:05:55.671 "nbd_device": "/dev/nbd1", 00:05:55.671 "bdev_name": "Malloc1" 00:05:55.671 } 00:05:55.671 ]' 00:05:55.671 23:11:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.671 { 00:05:55.671 "nbd_device": "/dev/nbd0", 00:05:55.671 "bdev_name": "Malloc0" 00:05:55.671 }, 00:05:55.671 { 00:05:55.671 "nbd_device": "/dev/nbd1", 00:05:55.671 "bdev_name": "Malloc1" 00:05:55.671 } 00:05:55.671 ]' 00:05:55.671 23:11:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.671 23:11:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.671 /dev/nbd1' 00:05:55.671 23:11:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.671 /dev/nbd1' 00:05:55.671 23:11:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.671 23:11:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.671 23:11:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.671 23:11:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.671 23:11:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.671 23:11:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.671 23:11:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.671 23:11:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.671 23:11:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.671 23:11:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.671 23:11:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.672 23:11:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.672 256+0 records in 00:05:55.672 256+0 records out 00:05:55.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00831899 s, 126 MB/s 00:05:55.672 23:11:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.672 23:11:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.672 256+0 records in 00:05:55.672 256+0 records out 00:05:55.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152926 s, 68.6 MB/s 00:05:55.672 23:11:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.672 23:11:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.934 256+0 records in 00:05:55.934 256+0 records out 00:05:55.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0322051 s, 32.6 MB/s 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.934 23:11:19 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.934 23:11:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.195 23:11:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.195 23:11:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.195 23:11:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.195 23:11:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.195 23:11:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.195 23:11:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.195 23:11:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.195 23:11:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.195 23:11:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.195 23:11:19 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.195 23:11:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.453 23:11:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.453 23:11:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.453 23:11:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.453 23:11:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.453 23:11:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.453 23:11:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.453 23:11:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.454 23:11:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.454 23:11:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.454 23:11:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.454 23:11:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.454 23:11:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.454 23:11:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.712 23:11:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.712 [2024-11-17 23:11:20.519978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.971 [2024-11-17 23:11:20.538704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.971 [2024-11-17 23:11:20.538807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.971 [2024-11-17 23:11:20.570759] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.971 [2024-11-17 23:11:20.570809] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.276 23:11:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.276 spdk_app_start Round 1 00:06:00.276 23:11:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:00.276 23:11:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70274 /var/tmp/spdk-nbd.sock 00:06:00.276 23:11:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70274 ']' 00:06:00.276 23:11:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.276 23:11:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.276 23:11:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
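The data-verify step seen in each round is plain dd plus cmp: fill a temp file with random bytes, copy it onto every nbd device with direct I/O, then compare the first 1M of each device back against the file. A condensed sketch of the write and verify phases from the trace (temp-file path simplified):

    # write phase: 256 x 4 KiB of random data, pushed to each device
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    done

    # verify phase: byte-for-byte comparison of the first 1M
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M /tmp/nbdrandtest "$nbd"
    done
    rm /tmp/nbdrandtest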
00:06:00.276 23:11:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.276 23:11:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.276 23:11:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.276 23:11:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:00.276 23:11:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.276 Malloc0 00:06:00.276 23:11:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.276 Malloc1 00:06:00.276 23:11:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.276 23:11:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.276 23:11:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.276 23:11:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.276 23:11:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.276 23:11:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.276 23:11:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.276 23:11:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.276 23:11:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.276 23:11:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.276 23:11:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.276 23:11:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.276 23:11:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.276 23:11:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.276 23:11:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.276 23:11:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.534 /dev/nbd0 00:06:00.534 23:11:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.534 23:11:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.534 23:11:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:00.534 23:11:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.534 23:11:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.534 23:11:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.534 23:11:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:00.534 23:11:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.534 23:11:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.534 23:11:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.534 23:11:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.534 1+0 records in 00:06:00.534 1+0 records out 
00:06:00.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000159317 s, 25.7 MB/s 00:06:00.534 23:11:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.534 23:11:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.534 23:11:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.534 23:11:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.534 23:11:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.534 23:11:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.534 23:11:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.534 23:11:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.794 /dev/nbd1 00:06:00.794 23:11:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.794 23:11:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.794 23:11:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:00.794 23:11:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.794 23:11:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.794 23:11:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.794 23:11:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:00.794 23:11:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.794 23:11:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.794 23:11:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.794 23:11:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.794 1+0 records in 00:06:00.794 1+0 records out 00:06:00.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179581 s, 22.8 MB/s 00:06:00.794 23:11:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.794 23:11:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.794 23:11:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.794 23:11:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.794 23:11:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.794 23:11:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.794 23:11:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.794 23:11:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.794 23:11:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.794 23:11:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.052 23:11:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.052 { 00:06:01.052 "nbd_device": "/dev/nbd0", 00:06:01.052 "bdev_name": "Malloc0" 00:06:01.052 }, 00:06:01.052 { 00:06:01.052 "nbd_device": "/dev/nbd1", 00:06:01.052 "bdev_name": "Malloc1" 00:06:01.052 } 
00:06:01.052 ]' 00:06:01.052 23:11:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.052 { 00:06:01.053 "nbd_device": "/dev/nbd0", 00:06:01.053 "bdev_name": "Malloc0" 00:06:01.053 }, 00:06:01.053 { 00:06:01.053 "nbd_device": "/dev/nbd1", 00:06:01.053 "bdev_name": "Malloc1" 00:06:01.053 } 00:06:01.053 ]' 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.053 /dev/nbd1' 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.053 /dev/nbd1' 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.053 256+0 records in 00:06:01.053 256+0 records out 00:06:01.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00630577 s, 166 MB/s 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.053 256+0 records in 00:06:01.053 256+0 records out 00:06:01.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116834 s, 89.7 MB/s 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.053 256+0 records in 00:06:01.053 256+0 records out 00:06:01.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199431 s, 52.6 MB/s 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.053 23:11:24 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.053 23:11:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.311 23:11:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.311 23:11:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.311 23:11:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.311 23:11:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.311 23:11:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.311 23:11:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.311 23:11:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.311 23:11:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.311 23:11:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.311 23:11:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.570 23:11:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.570 23:11:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.570 23:11:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.570 23:11:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.570 23:11:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.570 23:11:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.570 23:11:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.570 23:11:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.570 23:11:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.570 23:11:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.570 23:11:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.829 23:11:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.829 23:11:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.829 23:11:25 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.829 23:11:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.829 23:11:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.829 23:11:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.829 23:11:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.829 23:11:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.829 23:11:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.829 23:11:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.829 23:11:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.829 23:11:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.829 23:11:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.088 23:11:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:02.088 [2024-11-17 23:11:25.727974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.088 [2024-11-17 23:11:25.743502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.088 [2024-11-17 23:11:25.743506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.088 [2024-11-17 23:11:25.771750] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.088 [2024-11-17 23:11:25.771794] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.368 23:11:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:05.368 spdk_app_start Round 2 00:06:05.368 23:11:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:05.368 23:11:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70274 /var/tmp/spdk-nbd.sock 00:06:05.368 23:11:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70274 ']' 00:06:05.368 23:11:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.368 23:11:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.368 23:11:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
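nbd_get_count, invoked before and after each teardown, asks the app for its disk list and counts the /dev/nbd entries; when the list is empty, grep -c prints 0 but exits non-zero, which is why the trace shows a bare `true` after it. Roughly:

    nbd_get_count() {
        local rpc_server=$1 json names count
        json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        # grep -c exits non-zero on zero matches, so keep the 0 it printed
        count=$(echo "$names" | grep -c /dev/nbd || true)
        echo "$count"
    }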
00:06:05.368 23:11:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.368 23:11:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.368 23:11:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.368 23:11:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:05.368 23:11:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.368 Malloc0 00:06:05.368 23:11:29 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.639 Malloc1 00:06:05.639 23:11:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.639 23:11:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.639 23:11:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.640 23:11:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.640 23:11:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.640 23:11:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.640 23:11:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.640 23:11:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.640 23:11:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.640 23:11:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.640 23:11:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.640 23:11:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.640 23:11:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.640 23:11:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.640 23:11:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.640 23:11:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.902 /dev/nbd0 00:06:05.902 23:11:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.902 23:11:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.902 23:11:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:05.902 23:11:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:05.902 23:11:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.902 23:11:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.902 23:11:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:05.902 23:11:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:05.902 23:11:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.902 23:11:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.902 23:11:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.902 1+0 records in 00:06:05.902 1+0 records out 
00:06:05.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000153742 s, 26.6 MB/s 00:06:05.902 23:11:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.902 23:11:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:05.902 23:11:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.902 23:11:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.902 23:11:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:05.902 23:11:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.902 23:11:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.902 23:11:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.902 /dev/nbd1 00:06:06.161 23:11:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.161 23:11:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.161 23:11:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:06.161 23:11:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:06.161 23:11:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.161 23:11:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.161 23:11:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:06.161 23:11:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:06.161 23:11:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.161 23:11:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.161 23:11:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.161 1+0 records in 00:06:06.161 1+0 records out 00:06:06.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231718 s, 17.7 MB/s 00:06:06.161 23:11:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.161 23:11:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:06.161 23:11:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.161 23:11:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.161 23:11:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:06.161 23:11:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.161 23:11:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.161 23:11:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.161 23:11:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.161 23:11:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.161 23:11:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.161 { 00:06:06.161 "nbd_device": "/dev/nbd0", 00:06:06.161 "bdev_name": "Malloc0" 00:06:06.161 }, 00:06:06.161 { 00:06:06.161 "nbd_device": "/dev/nbd1", 00:06:06.161 "bdev_name": "Malloc1" 00:06:06.161 } 
00:06:06.161 ]' 00:06:06.161 23:11:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.161 { 00:06:06.161 "nbd_device": "/dev/nbd0", 00:06:06.161 "bdev_name": "Malloc0" 00:06:06.161 }, 00:06:06.161 { 00:06:06.161 "nbd_device": "/dev/nbd1", 00:06:06.161 "bdev_name": "Malloc1" 00:06:06.161 } 00:06:06.161 ]' 00:06:06.161 23:11:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.418 23:11:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.418 /dev/nbd1' 00:06:06.418 23:11:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.418 /dev/nbd1' 00:06:06.418 23:11:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.418 23:11:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.418 23:11:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.418 23:11:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.418 23:11:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.418 23:11:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.418 23:11:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.418 23:11:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.418 23:11:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.418 23:11:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.418 23:11:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.418 23:11:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.418 256+0 records in 00:06:06.418 256+0 records out 00:06:06.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00823591 s, 127 MB/s 00:06:06.418 23:11:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.418 23:11:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.418 256+0 records in 00:06:06.418 256+0 records out 00:06:06.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146764 s, 71.4 MB/s 00:06:06.418 23:11:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.418 23:11:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.418 256+0 records in 00:06:06.418 256+0 records out 00:06:06.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155602 s, 67.4 MB/s 00:06:06.418 23:11:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.418 23:11:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.418 23:11:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.418 23:11:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.418 23:11:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.418 23:11:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.418 23:11:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.418 23:11:30 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.418 23:11:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.418 23:11:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.418 23:11:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.418 23:11:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.418 23:11:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.418 23:11:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.418 23:11:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.418 23:11:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.418 23:11:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.418 23:11:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.418 23:11:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.679 23:11:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.940 23:11:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.940 23:11:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.940 23:11:30 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.940 23:11:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.940 23:11:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.940 23:11:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.940 23:11:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.940 23:11:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.940 23:11:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.940 23:11:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.940 23:11:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.940 23:11:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.940 23:11:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.200 23:11:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:07.200 [2024-11-17 23:11:31.001782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.461 [2024-11-17 23:11:31.021382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.461 [2024-11-17 23:11:31.021479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.461 [2024-11-17 23:11:31.053988] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.461 [2024-11-17 23:11:31.054036] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.764 23:11:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70274 /var/tmp/spdk-nbd.sock 00:06:10.764 23:11:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70274 ']' 00:06:10.764 23:11:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.764 23:11:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.764 23:11:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:10.764 23:11:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.764 23:11:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.764 23:11:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.764 23:11:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:10.764 23:11:34 event.app_repeat -- event/event.sh@39 -- # killprocess 70274 00:06:10.764 23:11:34 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70274 ']' 00:06:10.764 23:11:34 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70274 00:06:10.764 23:11:34 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:10.764 23:11:34 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.764 23:11:34 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70274 00:06:10.764 killing process with pid 70274 00:06:10.764 23:11:34 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.764 23:11:34 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.764 23:11:34 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70274' 00:06:10.764 23:11:34 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70274 00:06:10.764 23:11:34 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70274 00:06:10.764 spdk_app_start is called in Round 0. 00:06:10.764 Shutdown signal received, stop current app iteration 00:06:10.764 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 reinitialization... 00:06:10.764 spdk_app_start is called in Round 1. 00:06:10.764 Shutdown signal received, stop current app iteration 00:06:10.764 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 reinitialization... 00:06:10.764 spdk_app_start is called in Round 2. 00:06:10.764 Shutdown signal received, stop current app iteration 00:06:10.764 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 reinitialization... 00:06:10.764 spdk_app_start is called in Round 3. 00:06:10.764 Shutdown signal received, stop current app iteration 00:06:10.764 23:11:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:10.764 23:11:34 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:10.764 00:06:10.764 real 0m16.201s 00:06:10.764 user 0m36.351s 00:06:10.764 sys 0m1.939s 00:06:10.764 23:11:34 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.764 ************************************ 00:06:10.764 END TEST app_repeat 00:06:10.764 ************************************ 00:06:10.764 23:11:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.764 23:11:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:10.765 23:11:34 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:10.765 23:11:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.765 23:11:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.765 23:11:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.765 ************************************ 00:06:10.765 START TEST cpu_locks 00:06:10.765 ************************************ 00:06:10.765 23:11:34 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:10.765 * Looking for test storage... 
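killprocess, invoked above to bring down the app_repeat target (pid 70274), is a guarded kill-and-reap. A sketch of the flow the xtrace walks through; the sudo branch's real action is not visible here, so treat that line as an assumption:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                          # still alive?
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0
            [[ $process_name == sudo ]] && return 1         # assumption: never signal a sudo wrapper directly
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true    # reap it; a nonzero exit after SIGTERM is expected
    }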
00:06:10.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:10.765 23:11:34 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.765 23:11:34 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.765 23:11:34 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.765 23:11:34 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.765 23:11:34 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:10.765 23:11:34 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.765 23:11:34 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.765 --rc genhtml_branch_coverage=1 00:06:10.765 --rc genhtml_function_coverage=1 00:06:10.765 --rc genhtml_legend=1 00:06:10.765 --rc geninfo_all_blocks=1 00:06:10.765 --rc geninfo_unexecuted_blocks=1 00:06:10.765 00:06:10.765 ' 00:06:10.765 23:11:34 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.765 --rc genhtml_branch_coverage=1 00:06:10.765 --rc genhtml_function_coverage=1 
00:06:10.765 --rc genhtml_legend=1 00:06:10.765 --rc geninfo_all_blocks=1 00:06:10.765 --rc geninfo_unexecuted_blocks=1 00:06:10.765 00:06:10.765 ' 00:06:10.765 23:11:34 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.765 --rc genhtml_branch_coverage=1 00:06:10.765 --rc genhtml_function_coverage=1 00:06:10.765 --rc genhtml_legend=1 00:06:10.765 --rc geninfo_all_blocks=1 00:06:10.765 --rc geninfo_unexecuted_blocks=1 00:06:10.765 00:06:10.765 ' 00:06:10.765 23:11:34 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.765 --rc genhtml_branch_coverage=1 00:06:10.765 --rc genhtml_function_coverage=1 00:06:10.765 --rc genhtml_legend=1 00:06:10.765 --rc geninfo_all_blocks=1 00:06:10.765 --rc geninfo_unexecuted_blocks=1 00:06:10.765 00:06:10.765 ' 00:06:10.765 23:11:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:10.765 23:11:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:10.765 23:11:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:10.765 23:11:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:10.765 23:11:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.765 23:11:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.765 23:11:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.765 ************************************ 00:06:10.765 START TEST default_locks 00:06:10.765 ************************************ 00:06:10.765 23:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:10.765 23:11:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70686 00:06:10.765 23:11:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70686 00:06:10.765 23:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70686 ']' 00:06:10.765 23:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.765 23:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.765 23:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.765 23:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.765 23:11:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.765 23:11:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.765 [2024-11-17 23:11:34.557380] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
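The lcov probe a few lines up is a field-wise version comparison: split each version string on '.', '-' or ':', compare numerically field by field, and pad the shorter array with zeros; its verdict (lcov 1.x is older than 2) selects the --rc branch/function coverage flags exported above. A compact sketch of the lt/cmp_versions pair being traced (fields are assumed numeric here; the real scripts/common.sh routes each field through a 'decimal' normalizer first):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:                 # split on dot, dash or colon
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v d1 d2
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}    # missing fields count as 0
            ((d1 > d2)) && { [[ $op == '>' || $op == '>=' ]]; return; }
            ((d1 < d2)) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]    # all fields equal
    }

    lt 1.15 2 && echo "lcov older than 2"   # true: 1 < 2 on the first field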
00:06:10.765 [2024-11-17 23:11:34.557502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70686 ] 00:06:11.026 [2024-11-17 23:11:34.703538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.026 [2024-11-17 23:11:34.725097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.633 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.633 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:11.633 23:11:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70686 00:06:11.633 23:11:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.633 23:11:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70686 00:06:11.910 23:11:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70686 00:06:11.910 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 70686 ']' 00:06:11.910 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 70686 00:06:11.910 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:11.910 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.910 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70686 00:06:11.910 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.910 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.910 killing process with pid 70686 00:06:11.910 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70686' 00:06:11.910 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 70686 00:06:11.910 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 70686 00:06:12.171 23:11:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70686 00:06:12.171 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:12.171 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70686 00:06:12.171 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:12.171 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.171 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:12.171 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.171 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 70686 00:06:12.171 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70686 ']' 00:06:12.171 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.171 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.171 23:11:35 
event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.171 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.171 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.171 ERROR: process (pid: 70686) is no longer running 00:06:12.171 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70686) - No such process 00:06:12.171 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.171 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:12.171 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:12.171 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.172 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.172 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.172 23:11:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:12.172 23:11:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:12.172 23:11:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:12.172 23:11:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:12.172 00:06:12.172 real 0m1.428s 00:06:12.172 user 0m1.437s 00:06:12.172 sys 0m0.431s 00:06:12.172 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.172 ************************************ 00:06:12.172 END TEST default_locks 00:06:12.172 23:11:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.172 ************************************ 00:06:12.172 23:11:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:12.172 23:11:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.172 23:11:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.172 23:11:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.172 ************************************ 00:06:12.172 START TEST default_locks_via_rpc 00:06:12.172 ************************************ 00:06:12.172 23:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:12.172 23:11:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70733 00:06:12.172 23:11:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70733 00:06:12.172 23:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70733 ']' 00:06:12.172 23:11:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
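default_locks ends on a negative assertion: once pid 70686 is gone, waitforlisten against it must fail, and the NOT wrapper traced above converts that failure (es=1, "No such process") into a pass. A minimal sketch of the inversion at its core; the real common/autotest_common.sh helper also validates its argument and inspects signal-range exits ((es > 128)) before deciding, which is elided here:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))      # succeed only if the wrapped command failed
    }

    NOT waitforlisten 70686 /var/tmp/spdk.sock   # passes: the pid is dead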
00:06:12.172 23:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.172 23:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.172 23:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.172 23:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.172 23:11:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.433 [2024-11-17 23:11:36.049675] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:12.433 [2024-11-17 23:11:36.049821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70733 ] 00:06:12.433 [2024-11-17 23:11:36.199159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.433 [2024-11-17 23:11:36.229296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.375 23:11:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.375 23:11:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.375 23:11:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:13.375 23:11:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.375 23:11:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.375 23:11:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.375 23:11:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:13.375 23:11:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:13.375 23:11:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:13.375 23:11:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:13.375 23:11:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:13.375 23:11:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.375 23:11:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.375 23:11:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.375 23:11:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70733 00:06:13.375 23:11:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.375 23:11:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70733 00:06:13.375 23:11:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70733 00:06:13.375 23:11:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 70733 ']' 00:06:13.375 23:11:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 70733 00:06:13.375 23:11:37 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:13.375 23:11:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.375 23:11:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70733 00:06:13.375 23:11:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.375 23:11:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.375 killing process with pid 70733 00:06:13.375 23:11:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70733' 00:06:13.375 23:11:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 70733 00:06:13.375 23:11:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 70733 00:06:13.947 00:06:13.947 real 0m1.505s 00:06:13.947 user 0m1.471s 00:06:13.947 sys 0m0.508s 00:06:13.947 23:11:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.947 ************************************ 00:06:13.947 23:11:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.947 END TEST default_locks_via_rpc 00:06:13.947 ************************************ 00:06:13.947 23:11:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:13.947 23:11:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.947 23:11:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.947 23:11:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.947 ************************************ 00:06:13.947 START TEST non_locking_app_on_locked_coremask 00:06:13.947 ************************************ 00:06:13.947 23:11:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:13.947 23:11:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70780 00:06:13.947 23:11:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70780 /var/tmp/spdk.sock 00:06:13.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.947 23:11:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70780 ']' 00:06:13.947 23:11:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.947 23:11:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.947 23:11:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
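default_locks_via_rpc, which just finished, exercises the same lock files but toggles them at runtime instead of at startup: framework_disable_cpumask_locks drops the per-core lock (the no_locks glob above matched nothing afterwards), framework_enable_cpumask_locks re-acquires it, and locks_exist sees it again. The RPC names are verbatim from the trace; the rpc.py path is shortened here:

    rpc.py framework_disable_cpumask_locks    # releases the core-0 lock file
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null   # no output: nothing is held
    rpc.py framework_enable_cpumask_locks     # re-claims the core-0 lock
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "lock held again"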
00:06:13.947 23:11:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.947 23:11:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.947 23:11:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.947 [2024-11-17 23:11:37.611567] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:13.948 [2024-11-17 23:11:37.611726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70780 ] 00:06:13.948 [2024-11-17 23:11:37.751172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.209 [2024-11-17 23:11:37.780676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.783 23:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.783 23:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:14.783 23:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70796 00:06:14.783 23:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70796 /var/tmp/spdk2.sock 00:06:14.783 23:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70796 ']' 00:06:14.783 23:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.783 23:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.783 23:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:14.783 23:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.783 23:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.783 23:11:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.783 [2024-11-17 23:11:38.544440] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:14.784 [2024-11-17 23:11:38.544594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70796 ] 00:06:15.044 [2024-11-17 23:11:38.705471] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
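The launch pair above is the whole point of non_locking_app_on_locked_coremask: the first target claims core 0 normally, and the second can still come up on the same core only because it opts out of locking, as the "CPU core locks deactivated" notice confirms. Shape of the two launches (binary path shortened, backgrounding assumed):

    spdk_tgt -m 0x1 &                                                  # pid 70780: claims the core-0 lock
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # pid 70796: no lock taken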
00:06:15.044 [2024-11-17 23:11:38.705542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.044 [2024-11-17 23:11:38.768584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.615 23:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.615 23:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:15.615 23:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70780 00:06:15.615 23:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70780 00:06:15.615 23:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.192 23:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70780 00:06:16.192 23:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70780 ']' 00:06:16.192 23:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70780 00:06:16.192 23:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:16.192 23:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.192 23:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70780 00:06:16.192 23:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.192 killing process with pid 70780 00:06:16.192 23:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.192 23:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70780' 00:06:16.192 23:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70780 00:06:16.192 23:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70780 00:06:16.456 23:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70796 00:06:16.456 23:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70796 ']' 00:06:16.456 23:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70796 00:06:16.456 23:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:16.456 23:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.456 23:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70796 00:06:16.456 23:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.456 23:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.456 killing process with pid 70796 00:06:16.456 23:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70796' 00:06:16.456 23:11:40 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70796 00:06:16.456 23:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70796 00:06:16.717 00:06:16.717 real 0m2.940s 00:06:16.717 user 0m3.078s 00:06:16.717 sys 0m0.981s 00:06:16.717 23:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.717 ************************************ 00:06:16.717 END TEST non_locking_app_on_locked_coremask 00:06:16.717 ************************************ 00:06:16.717 23:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.717 23:11:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:16.717 23:11:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.717 23:11:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.717 23:11:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.717 ************************************ 00:06:16.717 START TEST locking_app_on_unlocked_coremask 00:06:16.717 ************************************ 00:06:16.717 23:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:16.717 23:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70854 00:06:16.717 23:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70854 /var/tmp/spdk.sock 00:06:16.717 23:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70854 ']' 00:06:16.717 23:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.717 23:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.717 23:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.717 23:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.717 23:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.717 23:11:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:16.993 [2024-11-17 23:11:40.588610] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:16.993 [2024-11-17 23:11:40.588710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70854 ] 00:06:16.993 [2024-11-17 23:11:40.716740] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:16.994 [2024-11-17 23:11:40.716786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.994 [2024-11-17 23:11:40.735595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.937 23:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.938 23:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:17.938 23:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70870 00:06:17.938 23:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70870 /var/tmp/spdk2.sock 00:06:17.938 23:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70870 ']' 00:06:17.938 23:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.938 23:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:17.938 23:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.938 23:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.938 23:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.938 23:11:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.938 [2024-11-17 23:11:41.509135] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
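locking_app_on_unlocked_coremask inverts the previous test: this time the first target starts with --disable-cpumask-locks, so core 0 stays unclaimed, and the plain second instance (pid 70870) is the one that takes the lock, which is exactly what the locks_exist check just below verifies. The helper itself is a one-liner over lslocks:

    locks_exist() {
        local pid=$1
        # per-core lock files held by the pid show up in lslocks output
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }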
00:06:17.938 [2024-11-17 23:11:41.509264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70870 ] 00:06:17.938 [2024-11-17 23:11:41.657197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.938 [2024-11-17 23:11:41.689795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.874 23:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.874 23:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:18.874 23:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70870 00:06:18.874 23:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.875 23:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70870 00:06:18.875 23:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70854 00:06:18.875 23:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70854 ']' 00:06:18.875 23:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70854 00:06:18.875 23:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:18.875 23:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.875 23:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70854 00:06:18.875 23:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.875 killing process with pid 70854 00:06:18.875 23:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.875 23:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70854' 00:06:18.875 23:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70854 00:06:18.875 23:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70854 00:06:19.444 23:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70870 00:06:19.444 23:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70870 ']' 00:06:19.444 23:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70870 00:06:19.444 23:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:19.444 23:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.444 23:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70870 00:06:19.444 23:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.444 killing process with pid 70870 00:06:19.444 23:11:43 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.444 23:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70870' 00:06:19.444 23:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70870 00:06:19.444 23:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70870 00:06:19.704 00:06:19.704 real 0m2.789s 00:06:19.704 user 0m3.125s 00:06:19.704 sys 0m0.721s 00:06:19.705 23:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.705 ************************************ 00:06:19.705 END TEST locking_app_on_unlocked_coremask 00:06:19.705 ************************************ 00:06:19.705 23:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.705 23:11:43 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:19.705 23:11:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.705 23:11:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.705 23:11:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.705 ************************************ 00:06:19.705 START TEST locking_app_on_locked_coremask 00:06:19.705 ************************************ 00:06:19.705 23:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:19.705 23:11:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70928 00:06:19.705 23:11:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 70928 /var/tmp/spdk.sock 00:06:19.705 23:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70928 ']' 00:06:19.705 23:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.705 23:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.705 23:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.705 23:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.705 23:11:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.705 23:11:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.705 [2024-11-17 23:11:43.444993] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
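waitforlisten, invoked again just above for pid 70928, is the gate every one of these tests passes through. Only its preamble is visible in the xtrace (rpc_addr, max_retries=100, the "Waiting for process..." echo) before xtrace_disable hides the loop, so the body below is an assumption: the general shape is poll-until-ready, bounded by max_retries:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
            [[ -S $rpc_addr ]] && return 0           # assumption: socket presence as the readiness probe
            sleep 0.1
        done
        return 1
    }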
00:06:19.705 [2024-11-17 23:11:43.445110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70928 ] 00:06:19.965 [2024-11-17 23:11:43.585776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.965 [2024-11-17 23:11:43.603137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.537 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.537 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:20.537 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70944 00:06:20.537 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70944 /var/tmp/spdk2.sock 00:06:20.537 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:20.537 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:20.537 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70944 /var/tmp/spdk2.sock 00:06:20.537 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:20.537 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.537 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:20.537 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.537 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 70944 /var/tmp/spdk2.sock 00:06:20.537 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70944 ']' 00:06:20.537 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.537 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.537 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.537 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.537 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.537 [2024-11-17 23:11:44.351820] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:20.537 [2024-11-17 23:11:44.352211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70944 ] 00:06:20.797 [2024-11-17 23:11:44.500108] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70928 has claimed it. 00:06:20.797 [2024-11-17 23:11:44.500158] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:21.366 ERROR: process (pid: 70944) is no longer running 00:06:21.366 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70944) - No such process 00:06:21.366 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.366 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:21.366 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:21.366 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:21.366 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:21.366 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:21.366 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 70928 00:06:21.366 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70928 00:06:21.366 23:11:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.366 23:11:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 70928 00:06:21.366 23:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70928 ']' 00:06:21.366 23:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70928 00:06:21.366 23:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:21.366 23:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.366 23:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70928 00:06:21.366 23:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.367 23:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.367 killing process with pid 70928 00:06:21.367 23:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70928' 00:06:21.367 23:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70928 00:06:21.367 23:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70928 00:06:21.628 00:06:21.628 real 0m2.027s 00:06:21.628 user 0m2.299s 00:06:21.628 sys 0m0.459s 00:06:21.628 23:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.628 ************************************ 00:06:21.628 END 
TEST locking_app_on_locked_coremask 00:06:21.628 ************************************ 00:06:21.628 23:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.889 23:11:45 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:21.889 23:11:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.889 23:11:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.889 23:11:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.889 ************************************ 00:06:21.889 START TEST locking_overlapped_coremask 00:06:21.889 ************************************ 00:06:21.890 23:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:21.890 23:11:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=70986 00:06:21.890 23:11:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 70986 /var/tmp/spdk.sock 00:06:21.890 23:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 70986 ']' 00:06:21.890 23:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.890 23:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.890 23:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.890 23:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.890 23:11:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.890 23:11:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:21.890 [2024-11-17 23:11:45.525709] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
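locking_overlapped_coremask widens the mask: -m 0x7 is binary 111, so the target just launched should claim three lock files, one per core. The mapping from mask bit to file name (the same naming check_remaining_locks compares against later) can be spelled out directly:

    for core in 0 1 2; do
        (( (0x7 >> core) & 1 )) && printf '/var/tmp/spdk_cpu_lock_%03d\n' "$core"
    done
    # /var/tmp/spdk_cpu_lock_000
    # /var/tmp/spdk_cpu_lock_001
    # /var/tmp/spdk_cpu_lock_002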
00:06:21.890 [2024-11-17 23:11:45.525834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70986 ] 00:06:21.890 [2024-11-17 23:11:45.666408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.890 [2024-11-17 23:11:45.691964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.890 [2024-11-17 23:11:45.692218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.890 [2024-11-17 23:11:45.692288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.831 23:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.831 23:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:22.831 23:11:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71004 00:06:22.831 23:11:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71004 /var/tmp/spdk2.sock 00:06:22.831 23:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:22.831 23:11:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:22.832 23:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71004 /var/tmp/spdk2.sock 00:06:22.832 23:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:22.832 23:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.832 23:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:22.832 23:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.832 23:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71004 /var/tmp/spdk2.sock 00:06:22.832 23:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71004 ']' 00:06:22.832 23:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.832 23:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.832 23:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.832 23:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.832 23:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.832 [2024-11-17 23:11:46.501660] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
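The second instance asks for -m 0x1c (binary 11100: cores 2, 3, 4) while the first still holds 0x7 (cores 0, 1, 2). The masks intersect on exactly one core, which predicts the claim error that follows:

    echo $(( 0x7 & 0x1c ))   # prints 4, i.e. bit 2 set: core 2 is contested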
00:06:22.832 [2024-11-17 23:11:46.502328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71004 ] 00:06:23.092 [2024-11-17 23:11:46.665471] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70986 has claimed it. 00:06:23.092 [2024-11-17 23:11:46.665531] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:23.351 ERROR: process (pid: 71004) is no longer running 00:06:23.351 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71004) - No such process 00:06:23.351 23:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.351 23:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:23.351 23:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:23.351 23:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:23.351 23:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:23.351 23:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:23.351 23:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:23.351 23:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:23.351 23:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:23.351 23:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:23.351 23:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 70986 00:06:23.351 23:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 70986 ']' 00:06:23.351 23:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 70986 00:06:23.351 23:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:23.351 23:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.351 23:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70986 00:06:23.351 23:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.351 23:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.351 killing process with pid 70986 00:06:23.351 23:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70986' 00:06:23.351 23:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 70986 00:06:23.351 23:11:47 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 70986 00:06:23.611 00:06:23.611 real 0m1.915s 00:06:23.611 user 0m5.444s 00:06:23.611 sys 0m0.361s 00:06:23.611 23:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.611 ************************************ 00:06:23.611 END TEST locking_overlapped_coremask 00:06:23.611 ************************************ 00:06:23.611 23:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.611 23:11:47 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:23.611 23:11:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.611 23:11:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.611 23:11:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.611 ************************************ 00:06:23.611 START TEST locking_overlapped_coremask_via_rpc 00:06:23.611 ************************************ 00:06:23.611 23:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:23.611 23:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71046 00:06:23.611 23:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71046 /var/tmp/spdk.sock 00:06:23.611 23:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71046 ']' 00:06:23.611 23:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.611 23:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.611 23:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.611 23:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:23.611 23:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.611 23:11:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.871 [2024-11-17 23:11:47.481517] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:23.871 [2024-11-17 23:11:47.481611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71046 ] 00:06:23.871 [2024-11-17 23:11:47.620358] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:23.871 [2024-11-17 23:11:47.620406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.871 [2024-11-17 23:11:47.640839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.871 [2024-11-17 23:11:47.640841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.871 [2024-11-17 23:11:47.640908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.814 23:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.814 23:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:24.814 23:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71064 00:06:24.814 23:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71064 /var/tmp/spdk2.sock 00:06:24.814 23:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71064 ']' 00:06:24.814 23:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.814 23:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.814 23:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:24.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.814 23:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.814 23:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.814 23:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.814 [2024-11-17 23:11:48.386870] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:24.814 [2024-11-17 23:11:48.387293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71064 ] 00:06:24.814 [2024-11-17 23:11:48.547160] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
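
Both targets in this variant start with --disable-cpumask-locks, which is why each prints the "CPU core locks deactivated" notice above instead of failing at startup; the overlap on core 2 only matters once locking is enabled over RPC. A minimal sketch of the two invocations, with the binary path, core masks, and socket taken verbatim from this log (mask 0x7 covers cores 0-2, 0x1c covers cores 2-4):

# Primary target on cores 0-2, default RPC socket /var/tmp/spdk.sock:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
# Second target on cores 2-4, answering on its own socket; core 2 overlaps:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
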
00:06:24.814 [2024-11-17 23:11:48.547208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.814 [2024-11-17 23:11:48.587003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.814 [2024-11-17 23:11:48.591067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.814 [2024-11-17 23:11:48.591219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.758 [2024-11-17 23:11:49.255001] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71046 has claimed it. 
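
The claim_cpu_cores error above is backed by per-core lock files under /var/tmp: once pid 71046 enables locking it holds spdk_cpu_lock_000 through spdk_cpu_lock_002, so any later claim on core 2 fails. A hedged reconstruction of the check_remaining_locks helper whose xtrace appears earlier in this log:

# Expand the lock files actually present and the set that core mask 0x7 should
# produce, then compare; a mismatch means a core lock leaked or was never taken.
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ ${locks[*]} == "${locks_expected[*]}" ]] || echo "unexpected lock files: ${locks[*]}"
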
00:06:25.758 request: 00:06:25.758 { 00:06:25.758 "method": "framework_enable_cpumask_locks", 00:06:25.758 "req_id": 1 00:06:25.758 } 00:06:25.758 Got JSON-RPC error response 00:06:25.758 response: 00:06:25.758 { 00:06:25.758 "code": -32603, 00:06:25.758 "message": "Failed to claim CPU core: 2" 00:06:25.758 } 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71046 /var/tmp/spdk.sock 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71046 ']' 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71064 /var/tmp/spdk2.sock 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71064 ']' 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
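
The request/response dump opening this stretch is rpc.py's standard failure output: framework_enable_cpumask_locks on the second target returns -32603 because core 2 is still held by pid 71046, and the test's NOT wrapper converts that expected failure (es=1) into a pass. The same probe can be issued by hand against the second target's socket (a sketch, assuming both targets above are still running):

# Expected to fail with code -32603 while pid 71046 holds the core-2 lock.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
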
00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.758 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.019 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.019 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:26.019 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:26.019 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:26.019 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:26.019 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:26.019 00:06:26.019 real 0m2.268s 00:06:26.019 user 0m1.059s 00:06:26.019 sys 0m0.131s 00:06:26.019 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.019 23:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.019 ************************************ 00:06:26.019 END TEST locking_overlapped_coremask_via_rpc 00:06:26.019 ************************************ 00:06:26.019 23:11:49 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:26.019 23:11:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71046 ]] 00:06:26.019 23:11:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71046 00:06:26.019 23:11:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71046 ']' 00:06:26.019 23:11:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71046 00:06:26.019 23:11:49 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:26.019 23:11:49 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.019 23:11:49 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71046 00:06:26.019 23:11:49 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.019 23:11:49 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.019 killing process with pid 71046 00:06:26.019 23:11:49 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71046' 00:06:26.019 23:11:49 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71046 00:06:26.019 23:11:49 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71046 00:06:26.280 23:11:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71064 ]] 00:06:26.280 23:11:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71064 00:06:26.280 23:11:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71064 ']' 00:06:26.280 23:11:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71064 00:06:26.280 23:11:49 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:26.280 23:11:49 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.280 
23:11:49 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71064 00:06:26.280 23:11:49 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:26.280 23:11:50 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:26.280 killing process with pid 71064 00:06:26.280 23:11:50 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71064' 00:06:26.280 23:11:50 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71064 00:06:26.280 23:11:50 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71064 00:06:26.541 23:11:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:26.541 23:11:50 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:26.541 23:11:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71046 ]] 00:06:26.541 23:11:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71046 00:06:26.541 23:11:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71046 ']' 00:06:26.541 23:11:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71046 00:06:26.541 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71046) - No such process 00:06:26.541 Process with pid 71046 is not found 00:06:26.541 23:11:50 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71046 is not found' 00:06:26.541 23:11:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71064 ]] 00:06:26.541 23:11:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71064 00:06:26.541 23:11:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71064 ']' 00:06:26.541 23:11:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71064 00:06:26.541 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71064) - No such process 00:06:26.541 Process with pid 71064 is not found 00:06:26.541 23:11:50 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71064 is not found' 00:06:26.541 23:11:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:26.541 00:06:26.541 real 0m15.906s 00:06:26.541 user 0m28.162s 00:06:26.541 sys 0m4.293s 00:06:26.541 23:11:50 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.541 23:11:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.541 ************************************ 00:06:26.541 END TEST cpu_locks 00:06:26.541 ************************************ 00:06:26.541 00:06:26.541 real 0m38.740s 00:06:26.541 user 1m15.549s 00:06:26.541 sys 0m7.012s 00:06:26.541 23:11:50 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.541 ************************************ 00:06:26.541 END TEST event 00:06:26.541 ************************************ 00:06:26.541 23:11:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.541 23:11:50 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:26.541 23:11:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.541 23:11:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.541 23:11:50 -- common/autotest_common.sh@10 -- # set +x 00:06:26.541 ************************************ 00:06:26.541 START TEST thread 00:06:26.541 ************************************ 00:06:26.541 23:11:50 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:26.802 * Looking for test storage... 
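
Before the thread suite output continues below, note the shape of the cpu_locks cleanup that just ran: killprocess probes each pid with kill -0 first, tolerating targets that already exited (the "No such process" / "is not found" lines above), and the lock files are removed unconditionally. A minimal sketch of that pattern, where "$pid" is a placeholder variable, not a name from the log:

# Hedged sketch of the cleanup seen above.
rm -f /var/tmp/spdk_cpu_lock_*
if kill -0 "$pid" 2>/dev/null; then
  kill "$pid" && wait "$pid"
else
  echo "Process with pid $pid is not found"
fi
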
00:06:26.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:26.802 23:11:50 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:26.802 23:11:50 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:26.802 23:11:50 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:26.802 23:11:50 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:26.802 23:11:50 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.802 23:11:50 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.802 23:11:50 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.802 23:11:50 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.802 23:11:50 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.802 23:11:50 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.802 23:11:50 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.802 23:11:50 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.802 23:11:50 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.802 23:11:50 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.802 23:11:50 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.802 23:11:50 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:26.802 23:11:50 thread -- scripts/common.sh@345 -- # : 1 00:06:26.802 23:11:50 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.802 23:11:50 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.802 23:11:50 thread -- scripts/common.sh@365 -- # decimal 1 00:06:26.802 23:11:50 thread -- scripts/common.sh@353 -- # local d=1 00:06:26.802 23:11:50 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.802 23:11:50 thread -- scripts/common.sh@355 -- # echo 1 00:06:26.802 23:11:50 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.802 23:11:50 thread -- scripts/common.sh@366 -- # decimal 2 00:06:26.802 23:11:50 thread -- scripts/common.sh@353 -- # local d=2 00:06:26.802 23:11:50 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.802 23:11:50 thread -- scripts/common.sh@355 -- # echo 2 00:06:26.802 23:11:50 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.802 23:11:50 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.802 23:11:50 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.802 23:11:50 thread -- scripts/common.sh@368 -- # return 0 00:06:26.802 23:11:50 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.802 23:11:50 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:26.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.802 --rc genhtml_branch_coverage=1 00:06:26.802 --rc genhtml_function_coverage=1 00:06:26.802 --rc genhtml_legend=1 00:06:26.802 --rc geninfo_all_blocks=1 00:06:26.802 --rc geninfo_unexecuted_blocks=1 00:06:26.802 00:06:26.802 ' 00:06:26.802 23:11:50 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:26.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.802 --rc genhtml_branch_coverage=1 00:06:26.802 --rc genhtml_function_coverage=1 00:06:26.802 --rc genhtml_legend=1 00:06:26.802 --rc geninfo_all_blocks=1 00:06:26.802 --rc geninfo_unexecuted_blocks=1 00:06:26.802 00:06:26.802 ' 00:06:26.802 23:11:50 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:26.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:26.803 --rc genhtml_branch_coverage=1 00:06:26.803 --rc genhtml_function_coverage=1 00:06:26.803 --rc genhtml_legend=1 00:06:26.803 --rc geninfo_all_blocks=1 00:06:26.803 --rc geninfo_unexecuted_blocks=1 00:06:26.803 00:06:26.803 ' 00:06:26.803 23:11:50 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:26.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.803 --rc genhtml_branch_coverage=1 00:06:26.803 --rc genhtml_function_coverage=1 00:06:26.803 --rc genhtml_legend=1 00:06:26.803 --rc geninfo_all_blocks=1 00:06:26.803 --rc geninfo_unexecuted_blocks=1 00:06:26.803 00:06:26.803 ' 00:06:26.803 23:11:50 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.803 23:11:50 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:26.803 23:11:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.803 23:11:50 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.803 ************************************ 00:06:26.803 START TEST thread_poller_perf 00:06:26.803 ************************************ 00:06:26.803 23:11:50 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.803 [2024-11-17 23:11:50.483530] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:26.803 [2024-11-17 23:11:50.483654] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71191 ] 00:06:27.064 [2024-11-17 23:11:50.624923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.064 Running 1000 pollers for 1 seconds with 1 microseconds period. 
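
The poller_perf flags decode directly against the banner just printed: -b 1000 gives the number of pollers, -t 1 the run time in seconds, and -l 1 the poller period in microseconds (the second run below uses -l 0 for untimed pollers). Usage as invoked here, with the flag mapping inferred from that banner:

# First run: 1000 pollers, 1 us period, 1 second, per
# "Running 1000 pollers for 1 seconds with 1 microseconds period" above.
/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
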
00:06:27.064 [2024-11-17 23:11:50.647908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.004 [2024-11-17T23:11:51.825Z] ====================================== 00:06:28.004 [2024-11-17T23:11:51.825Z] busy:2610879990 (cyc) 00:06:28.004 [2024-11-17T23:11:51.825Z] total_run_count: 412000 00:06:28.004 [2024-11-17T23:11:51.825Z] tsc_hz: 2600000000 (cyc) 00:06:28.004 [2024-11-17T23:11:51.825Z] ====================================== 00:06:28.004 [2024-11-17T23:11:51.825Z] poller_cost: 6337 (cyc), 2437 (nsec) 00:06:28.004 00:06:28.004 real 0m1.230s 00:06:28.004 user 0m1.076s 00:06:28.004 sys 0m0.049s 00:06:28.004 23:11:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.004 ************************************ 00:06:28.004 END TEST thread_poller_perf 00:06:28.004 ************************************ 00:06:28.004 23:11:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:28.004 23:11:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:28.004 23:11:51 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:28.004 23:11:51 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.004 23:11:51 thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.004 ************************************ 00:06:28.004 START TEST thread_poller_perf 00:06:28.004 ************************************ 00:06:28.004 23:11:51 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:28.004 [2024-11-17 23:11:51.778147] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:28.004 [2024-11-17 23:11:51.778288] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71222 ] 00:06:28.263 [2024-11-17 23:11:51.917956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.263 Running 1000 pollers for 1 seconds with 0 microseconds period. 
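
The poller_cost line in the table above is plain division over the other entries: cycles per poller call is busy divided by total_run_count, and the nanosecond figure converts that through tsc_hz. Checking the first run's numbers (pure arithmetic, no SPDK needed):

awk 'BEGIN {
  busy = 2610879990; runs = 412000; tsc_hz = 2600000000   # values from the table above
  cyc  = busy / runs                                      # 6337 cycles per poller call
  nsec = cyc * 1e9 / tsc_hz                               # ~2437 ns at 2.6 GHz
  printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
}'

The zero-period run that follows comes in far cheaper (487 cyc / 187 nsec in its table), presumably because untimed pollers skip the timer bookkeeping that 1 us pollers incur.
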
00:06:28.263 [2024-11-17 23:11:51.939654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.205 [2024-11-17T23:11:53.026Z] ====================================== 00:06:29.205 [2024-11-17T23:11:53.026Z] busy:2602932122 (cyc) 00:06:29.205 [2024-11-17T23:11:53.026Z] total_run_count: 5342000 00:06:29.205 [2024-11-17T23:11:53.026Z] tsc_hz: 2600000000 (cyc) 00:06:29.205 [2024-11-17T23:11:53.026Z] ====================================== 00:06:29.205 [2024-11-17T23:11:53.026Z] poller_cost: 487 (cyc), 187 (nsec) 00:06:29.205 00:06:29.205 real 0m1.229s 00:06:29.205 user 0m1.068s 00:06:29.205 sys 0m0.056s 00:06:29.205 23:11:52 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.205 ************************************ 00:06:29.205 END TEST thread_poller_perf 00:06:29.205 23:11:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.205 ************************************ 00:06:29.466 23:11:53 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:29.466 00:06:29.466 real 0m2.713s 00:06:29.466 user 0m2.249s 00:06:29.466 sys 0m0.231s 00:06:29.466 23:11:53 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.466 ************************************ 00:06:29.466 END TEST thread 00:06:29.466 ************************************ 00:06:29.466 23:11:53 thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.466 23:11:53 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:29.466 23:11:53 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:29.466 23:11:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.466 23:11:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.466 23:11:53 -- common/autotest_common.sh@10 -- # set +x 00:06:29.466 ************************************ 00:06:29.466 START TEST app_cmdline 00:06:29.466 ************************************ 00:06:29.466 23:11:53 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:29.466 * Looking for test storage... 
00:06:29.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:29.466 23:11:53 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:29.466 23:11:53 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:29.466 23:11:53 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:29.466 23:11:53 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.466 23:11:53 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:29.466 23:11:53 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.466 23:11:53 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:29.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.466 --rc genhtml_branch_coverage=1 00:06:29.466 --rc genhtml_function_coverage=1 00:06:29.466 --rc genhtml_legend=1 00:06:29.466 --rc geninfo_all_blocks=1 00:06:29.466 --rc geninfo_unexecuted_blocks=1 00:06:29.466 00:06:29.466 ' 00:06:29.466 23:11:53 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:29.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.466 --rc genhtml_branch_coverage=1 00:06:29.466 --rc genhtml_function_coverage=1 00:06:29.466 --rc genhtml_legend=1 00:06:29.466 --rc geninfo_all_blocks=1 00:06:29.466 --rc geninfo_unexecuted_blocks=1 00:06:29.466 
00:06:29.466 ' 00:06:29.466 23:11:53 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:29.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.466 --rc genhtml_branch_coverage=1 00:06:29.466 --rc genhtml_function_coverage=1 00:06:29.466 --rc genhtml_legend=1 00:06:29.466 --rc geninfo_all_blocks=1 00:06:29.466 --rc geninfo_unexecuted_blocks=1 00:06:29.466 00:06:29.466 ' 00:06:29.466 23:11:53 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:29.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.466 --rc genhtml_branch_coverage=1 00:06:29.466 --rc genhtml_function_coverage=1 00:06:29.466 --rc genhtml_legend=1 00:06:29.466 --rc geninfo_all_blocks=1 00:06:29.466 --rc geninfo_unexecuted_blocks=1 00:06:29.466 00:06:29.466 ' 00:06:29.466 23:11:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:29.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.466 23:11:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71300 00:06:29.466 23:11:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71300 00:06:29.466 23:11:53 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71300 ']' 00:06:29.466 23:11:53 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.466 23:11:53 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:29.466 23:11:53 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.466 23:11:53 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.466 23:11:53 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.466 23:11:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:29.728 [2024-11-17 23:11:53.301622] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:29.728 [2024-11-17 23:11:53.301739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71300 ] 00:06:29.728 [2024-11-17 23:11:53.445763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.728 [2024-11-17 23:11:53.474695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.667 23:11:54 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.667 23:11:54 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:30.667 23:11:54 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:30.667 { 00:06:30.667 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:06:30.667 "fields": { 00:06:30.668 "major": 25, 00:06:30.668 "minor": 1, 00:06:30.668 "patch": 0, 00:06:30.668 "suffix": "-pre", 00:06:30.668 "commit": "83e8405e4" 00:06:30.668 } 00:06:30.668 } 00:06:30.668 23:11:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:30.668 23:11:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:30.668 23:11:54 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:30.668 23:11:54 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:30.668 23:11:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:30.668 23:11:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:30.668 23:11:54 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.668 23:11:54 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:30.668 23:11:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:30.668 23:11:54 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.668 23:11:54 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:30.668 23:11:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:30.668 23:11:54 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.668 23:11:54 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:30.668 23:11:54 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.668 23:11:54 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.668 23:11:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.668 23:11:54 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.668 23:11:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.668 23:11:54 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.668 23:11:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.668 23:11:54 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.668 23:11:54 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:30.668 23:11:54 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.926 request: 00:06:30.926 { 00:06:30.926 "method": "env_dpdk_get_mem_stats", 00:06:30.926 "req_id": 1 00:06:30.926 } 00:06:30.926 Got JSON-RPC error response 00:06:30.926 response: 00:06:30.926 { 00:06:30.926 "code": -32601, 00:06:30.926 "message": "Method not found" 00:06:30.926 } 00:06:30.926 23:11:54 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:30.926 23:11:54 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.926 23:11:54 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:30.926 23:11:54 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.926 23:11:54 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71300 00:06:30.926 23:11:54 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71300 ']' 00:06:30.926 23:11:54 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71300 00:06:30.926 23:11:54 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:30.926 23:11:54 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.926 23:11:54 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71300 00:06:30.926 23:11:54 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.926 killing process with pid 71300 00:06:30.926 23:11:54 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.926 23:11:54 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71300' 00:06:30.927 23:11:54 app_cmdline -- common/autotest_common.sh@973 -- # kill 71300 00:06:30.927 23:11:54 app_cmdline -- common/autotest_common.sh@978 -- # wait 71300 00:06:31.185 00:06:31.185 real 0m1.776s 00:06:31.185 user 0m2.049s 00:06:31.185 sys 0m0.473s 00:06:31.185 23:11:54 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.185 ************************************ 00:06:31.185 END TEST app_cmdline 00:06:31.185 23:11:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.185 ************************************ 00:06:31.185 23:11:54 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:31.185 23:11:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.185 23:11:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.185 23:11:54 -- common/autotest_common.sh@10 -- # set +x 00:06:31.185 ************************************ 00:06:31.185 START TEST version 00:06:31.185 ************************************ 00:06:31.185 23:11:54 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:31.185 * Looking for test storage... 
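
Before the version-test output continues: the -32601 "Method not found" above is the allowlist doing its job, since this target was started with --rpcs-allowed spdk_get_version,rpc_get_methods and rejects every other method, including real ones like env_dpdk_get_mem_stats. Probing the allowlist by hand against the default socket (a sketch, assuming the target is still up):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods        # returns exactly the two allowed methods
/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version       # the version JSON shown earlier
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats # fails with -32601, as above
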
00:06:31.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:31.185 23:11:54 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:31.185 23:11:54 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:31.185 23:11:54 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:31.452 23:11:55 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:31.452 23:11:55 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.452 23:11:55 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.452 23:11:55 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.452 23:11:55 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.452 23:11:55 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.452 23:11:55 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.452 23:11:55 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.452 23:11:55 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.452 23:11:55 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.452 23:11:55 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.452 23:11:55 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.452 23:11:55 version -- scripts/common.sh@344 -- # case "$op" in 00:06:31.452 23:11:55 version -- scripts/common.sh@345 -- # : 1 00:06:31.452 23:11:55 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.452 23:11:55 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.452 23:11:55 version -- scripts/common.sh@365 -- # decimal 1 00:06:31.452 23:11:55 version -- scripts/common.sh@353 -- # local d=1 00:06:31.452 23:11:55 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.452 23:11:55 version -- scripts/common.sh@355 -- # echo 1 00:06:31.452 23:11:55 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.452 23:11:55 version -- scripts/common.sh@366 -- # decimal 2 00:06:31.452 23:11:55 version -- scripts/common.sh@353 -- # local d=2 00:06:31.452 23:11:55 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.452 23:11:55 version -- scripts/common.sh@355 -- # echo 2 00:06:31.452 23:11:55 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.452 23:11:55 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.452 23:11:55 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.452 23:11:55 version -- scripts/common.sh@368 -- # return 0 00:06:31.452 23:11:55 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.452 23:11:55 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:31.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.452 --rc genhtml_branch_coverage=1 00:06:31.452 --rc genhtml_function_coverage=1 00:06:31.452 --rc genhtml_legend=1 00:06:31.452 --rc geninfo_all_blocks=1 00:06:31.452 --rc geninfo_unexecuted_blocks=1 00:06:31.452 00:06:31.452 ' 00:06:31.452 23:11:55 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:31.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.452 --rc genhtml_branch_coverage=1 00:06:31.452 --rc genhtml_function_coverage=1 00:06:31.452 --rc genhtml_legend=1 00:06:31.452 --rc geninfo_all_blocks=1 00:06:31.452 --rc geninfo_unexecuted_blocks=1 00:06:31.452 00:06:31.452 ' 00:06:31.452 23:11:55 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:31.452 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:31.452 --rc genhtml_branch_coverage=1 00:06:31.452 --rc genhtml_function_coverage=1 00:06:31.452 --rc genhtml_legend=1 00:06:31.452 --rc geninfo_all_blocks=1 00:06:31.452 --rc geninfo_unexecuted_blocks=1 00:06:31.452 00:06:31.452 ' 00:06:31.452 23:11:55 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:31.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.452 --rc genhtml_branch_coverage=1 00:06:31.452 --rc genhtml_function_coverage=1 00:06:31.452 --rc genhtml_legend=1 00:06:31.452 --rc geninfo_all_blocks=1 00:06:31.452 --rc geninfo_unexecuted_blocks=1 00:06:31.452 00:06:31.452 ' 00:06:31.452 23:11:55 version -- app/version.sh@17 -- # get_header_version major 00:06:31.452 23:11:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.452 23:11:55 version -- app/version.sh@14 -- # cut -f2 00:06:31.452 23:11:55 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.452 23:11:55 version -- app/version.sh@17 -- # major=25 00:06:31.452 23:11:55 version -- app/version.sh@18 -- # get_header_version minor 00:06:31.452 23:11:55 version -- app/version.sh@14 -- # cut -f2 00:06:31.452 23:11:55 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.452 23:11:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.452 23:11:55 version -- app/version.sh@18 -- # minor=1 00:06:31.452 23:11:55 version -- app/version.sh@19 -- # get_header_version patch 00:06:31.452 23:11:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.452 23:11:55 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.452 23:11:55 version -- app/version.sh@14 -- # cut -f2 00:06:31.452 23:11:55 version -- app/version.sh@19 -- # patch=0 00:06:31.452 23:11:55 version -- app/version.sh@20 -- # get_header_version suffix 00:06:31.452 23:11:55 version -- app/version.sh@14 -- # cut -f2 00:06:31.452 23:11:55 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.452 23:11:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.452 23:11:55 version -- app/version.sh@20 -- # suffix=-pre 00:06:31.452 23:11:55 version -- app/version.sh@22 -- # version=25.1 00:06:31.452 23:11:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:31.452 23:11:55 version -- app/version.sh@28 -- # version=25.1rc0 00:06:31.452 23:11:55 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:31.452 23:11:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:31.452 23:11:55 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:31.452 23:11:55 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:31.452 00:06:31.452 real 0m0.204s 00:06:31.452 user 0m0.127s 00:06:31.452 sys 0m0.098s 00:06:31.452 23:11:55 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.452 23:11:55 version -- common/autotest_common.sh@10 -- # set +x 00:06:31.452 ************************************ 00:06:31.452 END TEST version 00:06:31.452 ************************************ 00:06:31.452 23:11:55 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:31.452 23:11:55 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:31.452 23:11:55 -- spdk/autotest.sh@194 -- # uname -s 00:06:31.452 23:11:55 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:31.452 23:11:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:31.452 23:11:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:31.452 23:11:55 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:31.452 23:11:55 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:31.452 23:11:55 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:31.452 23:11:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.452 23:11:55 -- common/autotest_common.sh@10 -- # set +x 00:06:31.452 ************************************ 00:06:31.452 START TEST blockdev_nvme 00:06:31.452 ************************************ 00:06:31.452 23:11:55 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:31.452 * Looking for test storage... 00:06:31.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:31.452 23:11:55 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:31.452 23:11:55 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:31.452 23:11:55 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:31.710 23:11:55 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:31.710 23:11:55 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.710 23:11:55 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.710 23:11:55 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.711 23:11:55 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:31.711 23:11:55 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.711 23:11:55 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:31.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.711 --rc genhtml_branch_coverage=1 00:06:31.711 --rc genhtml_function_coverage=1 00:06:31.711 --rc genhtml_legend=1 00:06:31.711 --rc geninfo_all_blocks=1 00:06:31.711 --rc geninfo_unexecuted_blocks=1 00:06:31.711 00:06:31.711 ' 00:06:31.711 23:11:55 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:31.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.711 --rc genhtml_branch_coverage=1 00:06:31.711 --rc genhtml_function_coverage=1 00:06:31.711 --rc genhtml_legend=1 00:06:31.711 --rc geninfo_all_blocks=1 00:06:31.711 --rc geninfo_unexecuted_blocks=1 00:06:31.711 00:06:31.711 ' 00:06:31.711 23:11:55 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:31.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.711 --rc genhtml_branch_coverage=1 00:06:31.711 --rc genhtml_function_coverage=1 00:06:31.711 --rc genhtml_legend=1 00:06:31.711 --rc geninfo_all_blocks=1 00:06:31.711 --rc geninfo_unexecuted_blocks=1 00:06:31.711 00:06:31.711 ' 00:06:31.711 23:11:55 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:31.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.711 --rc genhtml_branch_coverage=1 00:06:31.711 --rc genhtml_function_coverage=1 00:06:31.711 --rc genhtml_legend=1 00:06:31.711 --rc geninfo_all_blocks=1 00:06:31.711 --rc geninfo_unexecuted_blocks=1 00:06:31.711 00:06:31.711 ' 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:31.711 23:11:55 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71461 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71461 00:06:31.711 23:11:55 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 71461 ']' 00:06:31.711 23:11:55 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.711 23:11:55 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.711 23:11:55 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.711 23:11:55 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.711 23:11:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.711 23:11:55 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:31.711 [2024-11-17 23:11:55.403819] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
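
The blockdev setup that follows generates an NVMe bdev config with gen_nvme.sh and feeds it to load_subsystem_config; each of the four controllers in that JSON becomes one bdev_nvme_attach_controller call. The same attachment can be expressed per controller over rpc.py (a sketch against a live target; the name and PCI address are taken from the first entry of the generated config visible below, and the rpc.py flags are standard SPDK options not shown in this log):

# Equivalent one-controller attach; the test instead loads all four controllers
# in a single load_subsystem_config call, as the JSON below shows.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
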
00:06:31.711 [2024-11-17 23:11:55.403950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71461 ] 00:06:31.971 [2024-11-17 23:11:55.544673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.971 [2024-11-17 23:11:55.563487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.541 23:11:56 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.541 23:11:56 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:32.541 23:11:56 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:32.541 23:11:56 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:32.541 23:11:56 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:32.541 23:11:56 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:32.541 23:11:56 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:32.541 23:11:56 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:32.541 23:11:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.541 23:11:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.801 23:11:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.801 23:11:56 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:32.801 23:11:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.801 23:11:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.801 23:11:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.801 23:11:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:32.801 23:11:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:32.801 23:11:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.801 23:11:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.060 23:11:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.060 23:11:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:33.060 23:11:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.060 23:11:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.060 23:11:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.060 23:11:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:33.060 23:11:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.060 23:11:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.060 23:11:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.060 23:11:56 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:33.060 23:11:56 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:33.060 23:11:56 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:33.060 23:11:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.060 23:11:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.060 23:11:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.060 23:11:56 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:33.060 23:11:56 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:33.061 23:11:56 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "a2c69db5-968d-406f-864a-39ff79c9b91a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a2c69db5-968d-406f-864a-39ff79c9b91a",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "b30e6a2c-b41b-4901-bac6-c87d92245182"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b30e6a2c-b41b-4901-bac6-c87d92245182",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "74d92c9a-3eb7-458b-8545-4570410f9d0c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "74d92c9a-3eb7-458b-8545-4570410f9d0c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "c01ee7e9-c92c-4741-9d7a-3d2fe700571e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c01ee7e9-c92c-4741-9d7a-3d2fe700571e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "7c193e68-b271-484c-ac8a-d9e9953730d1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "7c193e68-b271-484c-ac8a-d9e9953730d1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "cb91ae06-ac8c-4ca0-960a-97bebb9903b0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "cb91ae06-ac8c-4ca0-960a-97bebb9903b0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:33.061 23:11:56 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:33.061 23:11:56 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:33.061 23:11:56 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:33.061 23:11:56 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 71461 00:06:33.061 23:11:56 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 71461 ']' 00:06:33.061 23:11:56 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 71461 00:06:33.061 23:11:56 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:33.061 23:11:56 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.061 23:11:56 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71461 00:06:33.061 23:11:56 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.061 23:11:56 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.061 killing process with pid 71461 00:06:33.061 23:11:56 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71461' 00:06:33.061 23:11:56 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 71461 00:06:33.061 23:11:56 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 71461 00:06:33.320 23:11:57 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:33.320 23:11:57 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:33.320 23:11:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:33.320 23:11:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.320 23:11:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.320 ************************************ 00:06:33.320 START TEST bdev_hello_world 00:06:33.320 ************************************ 00:06:33.320 23:11:57 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:33.320 [2024-11-17 23:11:57.100914] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:33.320 [2024-11-17 23:11:57.101035] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71534 ] 00:06:33.580 [2024-11-17 23:11:57.246662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.580 [2024-11-17 23:11:57.265895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.840 [2024-11-17 23:11:57.635518] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:33.840 [2024-11-17 23:11:57.635564] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:33.840 [2024-11-17 23:11:57.635590] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:33.840 [2024-11-17 23:11:57.637659] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:33.840 [2024-11-17 23:11:57.638573] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:33.840 [2024-11-17 23:11:57.638603] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:33.840 [2024-11-17 23:11:57.639147] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
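The hello-world exchange logged here (open the bdev, write, read the string back) is produced by a single example binary; rerunning it by hand against the same generated config is just:

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1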
00:06:33.840 00:06:33.840 [2024-11-17 23:11:57.639180] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:34.116 00:06:34.116 real 0m0.750s 00:06:34.116 user 0m0.498s 00:06:34.116 sys 0m0.148s 00:06:34.116 23:11:57 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.116 ************************************ 00:06:34.116 END TEST bdev_hello_world 00:06:34.116 ************************************ 00:06:34.116 23:11:57 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:34.116 23:11:57 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:34.116 23:11:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:34.116 23:11:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.116 23:11:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:34.116 ************************************ 00:06:34.116 START TEST bdev_bounds 00:06:34.116 ************************************ 00:06:34.116 23:11:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:34.116 Process bdevio pid: 71565 00:06:34.116 23:11:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71565 00:06:34.116 23:11:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.116 23:11:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71565' 00:06:34.116 23:11:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71565 00:06:34.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.116 23:11:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 71565 ']' 00:06:34.116 23:11:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.116 23:11:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.116 23:11:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.116 23:11:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:34.116 23:11:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.116 23:11:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:34.116 [2024-11-17 23:11:57.906141] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
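The bounds suite that follows is driven by the standalone bdevio app plus its RPC test driver; the two commands the harness chains together (both visible in this trace) reduce to:

# start bdevio idle, waiting for RPC commands (-w), with no pre-reserved memory (-s 0)
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
# then kick off the full test matrix across every registered bdev
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests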
00:06:34.116 [2024-11-17 23:11:57.906262] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71565 ] 00:06:34.377 [2024-11-17 23:11:58.050972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.377 [2024-11-17 23:11:58.075257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.377 [2024-11-17 23:11:58.075423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.377 [2024-11-17 23:11:58.075502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.949 23:11:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.949 23:11:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:34.949 23:11:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:35.211 I/O targets: 00:06:35.211 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:35.211 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:35.211 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:35.211 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:35.211 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:35.211 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:35.211 00:06:35.211 00:06:35.211 CUnit - A unit testing framework for C - Version 2.1-3 00:06:35.211 http://cunit.sourceforge.net/ 00:06:35.211 00:06:35.211 00:06:35.211 Suite: bdevio tests on: Nvme3n1 00:06:35.211 Test: blockdev write read block ...passed 00:06:35.211 Test: blockdev write zeroes read block ...passed 00:06:35.211 Test: blockdev write zeroes read no split ...passed 00:06:35.211 Test: blockdev write zeroes read split ...passed 00:06:35.211 Test: blockdev write zeroes read split partial ...passed 00:06:35.211 Test: blockdev reset ...[2024-11-17 23:11:58.939167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:35.211 [2024-11-17 23:11:58.942969] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:35.211 passed 00:06:35.211 Test: blockdev write read 8 blocks ...passed 00:06:35.211 Test: blockdev write read size > 128k ...passed 00:06:35.211 Test: blockdev write read invalid size ...passed 00:06:35.211 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.211 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.211 Test: blockdev write read max offset ...passed 00:06:35.211 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.211 Test: blockdev writev readv 8 blocks ...passed 00:06:35.211 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.211 Test: blockdev writev readv block ...passed 00:06:35.211 Test: blockdev writev readv size > 128k ...passed 00:06:35.211 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.211 Test: blockdev comparev and writev ...[2024-11-17 23:11:58.961749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2f060a000 len:0x1000 00:06:35.211 [2024-11-17 23:11:58.961915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:35.211 passed 00:06:35.211 Test: blockdev nvme passthru rw ...passed 00:06:35.211 Test: blockdev nvme passthru vendor specific ...[2024-11-17 23:11:58.964826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:35.211 passed 00:06:35.211 Test: blockdev nvme admin passthru ...[2024-11-17 23:11:58.964938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:35.211 passed 00:06:35.211 Test: blockdev copy ...passed 00:06:35.211 Suite: bdevio tests on: Nvme2n3 00:06:35.211 Test: blockdev write read block ...passed 00:06:35.211 Test: blockdev write zeroes read block ...passed 00:06:35.211 Test: blockdev write zeroes read no split ...passed 00:06:35.211 Test: blockdev write zeroes read split ...passed 00:06:35.211 Test: blockdev write zeroes read split partial ...passed 00:06:35.211 Test: blockdev reset ...[2024-11-17 23:11:58.995755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:35.211 [2024-11-17 23:11:58.998596] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:35.211 passed 00:06:35.211 Test: blockdev write read 8 blocks ...passed 00:06:35.211 Test: blockdev write read size > 128k ...passed 00:06:35.211 Test: blockdev write read invalid size ...passed 00:06:35.211 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.211 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.211 Test: blockdev write read max offset ...passed 00:06:35.211 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.211 Test: blockdev writev readv 8 blocks ...passed 00:06:35.211 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.211 Test: blockdev writev readv block ...passed 00:06:35.211 Test: blockdev writev readv size > 128k ...passed 00:06:35.211 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.211 Test: blockdev comparev and writev ...[2024-11-17 23:11:59.015063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2f0603000 len:0x1000 00:06:35.211 [2024-11-17 23:11:59.015132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:35.211 passed 00:06:35.211 Test: blockdev nvme passthru rw ...passed 00:06:35.211 Test: blockdev nvme passthru vendor specific ...[2024-11-17 23:11:59.017630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:35.211 [2024-11-17 23:11:59.017673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:35.211 passed 00:06:35.211 Test: blockdev nvme admin passthru ...passed 00:06:35.475 Test: blockdev copy ...passed 00:06:35.475 Suite: bdevio tests on: Nvme2n2 00:06:35.475 Test: blockdev write read block ...passed 00:06:35.475 Test: blockdev write zeroes read block ...passed 00:06:35.475 Test: blockdev write zeroes read no split ...passed 00:06:35.475 Test: blockdev write zeroes read split ...passed 00:06:35.475 Test: blockdev write zeroes read split partial ...passed 00:06:35.475 Test: blockdev reset ...[2024-11-17 23:11:59.052431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:35.475 [2024-11-17 23:11:59.055684] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:35.475 passed 00:06:35.475 Test: blockdev write read 8 blocks ...passed 00:06:35.475 Test: blockdev write read size > 128k ...passed 00:06:35.475 Test: blockdev write read invalid size ...passed 00:06:35.475 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.475 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.475 Test: blockdev write read max offset ...passed 00:06:35.475 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.475 Test: blockdev writev readv 8 blocks ...passed 00:06:35.475 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.476 Test: blockdev writev readv block ...passed 00:06:35.476 Test: blockdev writev readv size > 128k ...passed 00:06:35.476 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.476 Test: blockdev comparev and writev ...[2024-11-17 23:11:59.073277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2f0603000 len:0x1000 00:06:35.476 [2024-11-17 23:11:59.073339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:35.476 passed 00:06:35.476 Test: blockdev nvme passthru rw ...passed 00:06:35.476 Test: blockdev nvme passthru vendor specific ...[2024-11-17 23:11:59.075947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:35.476 [2024-11-17 23:11:59.075991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:35.476 passed 00:06:35.476 Test: blockdev nvme admin passthru ...passed 00:06:35.476 Test: blockdev copy ...passed 00:06:35.476 Suite: bdevio tests on: Nvme2n1 00:06:35.476 Test: blockdev write read block ...passed 00:06:35.476 Test: blockdev write zeroes read block ...passed 00:06:35.476 Test: blockdev write zeroes read no split ...passed 00:06:35.476 Test: blockdev write zeroes read split ...passed 00:06:35.476 Test: blockdev write zeroes read split partial ...passed 00:06:35.476 Test: blockdev reset ...[2024-11-17 23:11:59.108499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:35.476 [2024-11-17 23:11:59.110906] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:35.476 passed 00:06:35.476 Test: blockdev write read 8 blocks ...passed 00:06:35.476 Test: blockdev write read size > 128k ...passed 00:06:35.476 Test: blockdev write read invalid size ...passed 00:06:35.476 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.476 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.476 Test: blockdev write read max offset ...passed 00:06:35.476 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.476 Test: blockdev writev readv 8 blocks ...passed 00:06:35.476 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.476 Test: blockdev writev readv block ...passed 00:06:35.476 Test: blockdev writev readv size > 128k ...passed 00:06:35.476 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.476 Test: blockdev comparev and writev ...[2024-11-17 23:11:59.127799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2f0603000 len:0x1000 00:06:35.476 [2024-11-17 23:11:59.127859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:35.476 passed 00:06:35.476 Test: blockdev nvme passthru rw ...passed 00:06:35.476 Test: blockdev nvme passthru vendor specific ...[2024-11-17 23:11:59.130311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:35.476 [2024-11-17 23:11:59.130353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:35.476 passed 00:06:35.476 Test: blockdev nvme admin passthru ...passed 00:06:35.477 Test: blockdev copy ...passed 00:06:35.477 Suite: bdevio tests on: Nvme1n1 00:06:35.477 Test: blockdev write read block ...passed 00:06:35.477 Test: blockdev write zeroes read block ...passed 00:06:35.477 Test: blockdev write zeroes read no split ...passed 00:06:35.477 Test: blockdev write zeroes read split ...passed 00:06:35.477 Test: blockdev write zeroes read split partial ...passed 00:06:35.477 Test: blockdev reset ...[2024-11-17 23:11:59.162641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:35.477 [2024-11-17 23:11:59.165250] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:35.477 passed 00:06:35.477 Test: blockdev write read 8 blocks ...passed 00:06:35.477 Test: blockdev write read size > 128k ...passed 00:06:35.477 Test: blockdev write read invalid size ...passed 00:06:35.477 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.477 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.477 Test: blockdev write read max offset ...passed 00:06:35.477 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.477 Test: blockdev writev readv 8 blocks ...passed 00:06:35.477 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.477 Test: blockdev writev readv block ...passed 00:06:35.477 Test: blockdev writev readv size > 128k ...passed 00:06:35.477 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.477 Test: blockdev comparev and writev ...[2024-11-17 23:11:59.182510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1436000 len:0x1000 00:06:35.477 [2024-11-17 23:11:59.182569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:35.477 passed 00:06:35.477 Test: blockdev nvme passthru rw ...passed 00:06:35.477 Test: blockdev nvme passthru vendor specific ...[2024-11-17 23:11:59.184771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:35.477 [2024-11-17 23:11:59.184814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:35.477 passed 00:06:35.477 Test: blockdev nvme admin passthru ...passed 00:06:35.477 Test: blockdev copy ...passed 00:06:35.477 Suite: bdevio tests on: Nvme0n1 00:06:35.477 Test: blockdev write read block ...passed 00:06:35.477 Test: blockdev write zeroes read block ...passed 00:06:35.477 Test: blockdev write zeroes read no split ...passed 00:06:35.477 Test: blockdev write zeroes read split ...passed 00:06:35.477 Test: blockdev write zeroes read split partial ...passed 00:06:35.477 Test: blockdev reset ...[2024-11-17 23:11:59.219531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:35.477 [2024-11-17 23:11:59.222638] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:06:35.477 passed 00:06:35.477 Test: blockdev write read 8 blocks ...passed 00:06:35.477 Test: blockdev write read size > 128k ...passed 00:06:35.477 Test: blockdev write read invalid size ...passed 00:06:35.477 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:35.477 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:35.477 Test: blockdev write read max offset ...passed 00:06:35.477 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:35.477 Test: blockdev writev readv 8 blocks ...passed 00:06:35.477 Test: blockdev writev readv 30 x 1block ...passed 00:06:35.477 Test: blockdev writev readv block ...passed 00:06:35.477 Test: blockdev writev readv size > 128k ...passed 00:06:35.477 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:35.477 Test: blockdev comparev and writev ...passed 00:06:35.478 Test: blockdev nvme passthru rw ...[2024-11-17 23:11:59.237598] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:35.478 separate metadata which is not supported yet. 00:06:35.478 passed 00:06:35.478 Test: blockdev nvme passthru vendor specific ...[2024-11-17 23:11:59.238923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:35.478 [2024-11-17 23:11:59.238975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:35.478 passed 00:06:35.478 Test: blockdev nvme admin passthru ...passed 00:06:35.478 Test: blockdev copy ...passed 00:06:35.478 00:06:35.478 Run Summary: Type Total Ran Passed Failed Inactive 00:06:35.478 suites 6 6 n/a 0 0 00:06:35.478 tests 138 138 138 0 0 00:06:35.478 asserts 893 893 893 0 n/a 00:06:35.478 00:06:35.478 Elapsed time = 0.735 seconds 00:06:35.478 0 00:06:35.478 23:11:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71565 00:06:35.478 23:11:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 71565 ']' 00:06:35.478 23:11:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 71565 00:06:35.478 23:11:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:35.478 23:11:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.478 23:11:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71565 00:06:35.478 23:11:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.478 killing process with pid 71565 00:06:35.478 23:11:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.478 23:11:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71565' 00:06:35.478 23:11:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 71565 00:06:35.478 23:11:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 71565 00:06:35.743 23:11:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:35.743 00:06:35.743 real 0m1.607s 00:06:35.743 user 0m4.060s 00:06:35.743 sys 0m0.358s 00:06:35.743 23:11:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.743 23:11:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:35.743 ************************************ 00:06:35.743 END 
TEST bdev_bounds 00:06:35.743 ************************************ 00:06:35.743 23:11:59 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:35.743 23:11:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:35.743 23:11:59 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.743 23:11:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:35.743 ************************************ 00:06:35.743 START TEST bdev_nbd 00:06:35.743 ************************************ 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71614 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71614 /var/tmp/spdk-nbd.sock 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 71614 ']' 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
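Stripped of its per-device loops, the nbd_function_test below does three things: expose an RPC socket via bdev_svc, export each bdev as a kernel NBD device, and tear it down again. A condensed single-device sketch using only calls that appear in this trace (the explicit /dev/nbd0 argument is optional; the harness lets the target pick a device):

/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
# export the bdev through the kernel NBD driver, then detach it again
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0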
00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:35.743 23:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:36.003 [2024-11-17 23:11:59.604989] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:36.003 [2024-11-17 23:11:59.605554] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.003 [2024-11-17 23:11:59.754200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.003 [2024-11-17 23:11:59.785780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.945 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.945 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:36.945 23:12:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:36.945 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.945 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:36.945 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:36.945 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:36.945 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.945 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:36.945 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:36.945 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:36.945 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # 
local i 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.946 1+0 records in 00:06:36.946 1+0 records out 00:06:36.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106709 s, 3.8 MB/s 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:36.946 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:37.206 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:37.206 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:37.206 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:37.206 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:37.206 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.206 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.206 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.206 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:37.206 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.206 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.206 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.206 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.206 1+0 records in 00:06:37.206 1+0 records out 00:06:37.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000992409 s, 4.1 MB/s 00:06:37.206 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.206 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.206 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
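Each waitfornbd check in this block boils down to a single direct-I/O read through the kernel block layer, asserting that exactly one 4096-byte block comes back from the exported device:

dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest   # must report 4096
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest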
00:06:37.206 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.206 23:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.206 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:37.206 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:37.206 23:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:37.467 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:37.467 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:37.467 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:37.467 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:37.467 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.467 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.467 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.467 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:37.467 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.467 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.467 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.467 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.467 1+0 records in 00:06:37.467 1+0 records out 00:06:37.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113297 s, 3.6 MB/s 00:06:37.467 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.467 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.467 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.467 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.467 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.467 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:37.467 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:37.467 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:37.728 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:37.728 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:37.728 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:37.728 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:37.728 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.728 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.728 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.728 23:12:01 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:37.728 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.728 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.728 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.728 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.728 1+0 records in 00:06:37.728 1+0 records out 00:06:37.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00156455 s, 2.6 MB/s 00:06:37.728 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.728 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.728 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.728 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.728 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.728 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:37.728 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:37.728 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:37.989 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:37.989 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:37.989 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:37.989 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:37.989 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.989 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.989 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.989 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:37.989 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.989 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.989 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.989 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.989 1+0 records in 00:06:37.989 1+0 records out 00:06:37.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000887925 s, 4.6 MB/s 00:06:37.989 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.989 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.989 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.989 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.989 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.989 23:12:01 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:37.989 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:37.989 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:38.249 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:38.249 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:38.249 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:38.249 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:38.249 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:38.249 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.249 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.250 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:38.250 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:38.250 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.250 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.250 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.250 1+0 records in 00:06:38.250 1+0 records out 00:06:38.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130911 s, 3.1 MB/s 00:06:38.250 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.250 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:38.250 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.250 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.250 23:12:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:38.250 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:38.250 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:38.250 23:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.512 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:38.512 { 00:06:38.512 "nbd_device": "/dev/nbd0", 00:06:38.512 "bdev_name": "Nvme0n1" 00:06:38.512 }, 00:06:38.512 { 00:06:38.512 "nbd_device": "/dev/nbd1", 00:06:38.512 "bdev_name": "Nvme1n1" 00:06:38.512 }, 00:06:38.512 { 00:06:38.512 "nbd_device": "/dev/nbd2", 00:06:38.512 "bdev_name": "Nvme2n1" 00:06:38.512 }, 00:06:38.512 { 00:06:38.512 "nbd_device": "/dev/nbd3", 00:06:38.512 "bdev_name": "Nvme2n2" 00:06:38.512 }, 00:06:38.512 { 00:06:38.512 "nbd_device": "/dev/nbd4", 00:06:38.512 "bdev_name": "Nvme2n3" 00:06:38.512 }, 00:06:38.512 { 00:06:38.512 "nbd_device": "/dev/nbd5", 00:06:38.512 "bdev_name": "Nvme3n1" 00:06:38.512 } 00:06:38.512 ]' 00:06:38.512 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:38.512 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 
00:06:38.512 { 00:06:38.512 "nbd_device": "/dev/nbd0", 00:06:38.512 "bdev_name": "Nvme0n1" 00:06:38.512 }, 00:06:38.512 { 00:06:38.512 "nbd_device": "/dev/nbd1", 00:06:38.512 "bdev_name": "Nvme1n1" 00:06:38.512 }, 00:06:38.512 { 00:06:38.512 "nbd_device": "/dev/nbd2", 00:06:38.512 "bdev_name": "Nvme2n1" 00:06:38.512 }, 00:06:38.512 { 00:06:38.512 "nbd_device": "/dev/nbd3", 00:06:38.512 "bdev_name": "Nvme2n2" 00:06:38.512 }, 00:06:38.512 { 00:06:38.512 "nbd_device": "/dev/nbd4", 00:06:38.512 "bdev_name": "Nvme2n3" 00:06:38.512 }, 00:06:38.512 { 00:06:38.512 "nbd_device": "/dev/nbd5", 00:06:38.512 "bdev_name": "Nvme3n1" 00:06:38.512 } 00:06:38.512 ]' 00:06:38.512 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:38.512 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:38.512 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.512 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:38.512 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.512 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:38.512 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.512 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.773 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.773 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.773 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.773 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.773 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.773 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.773 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:38.773 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.773 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.773 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.034 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.034 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.034 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.034 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.034 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.034 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.034 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.034 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.034 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.034 23:12:02 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:39.301 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:39.301 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:39.301 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:39.302 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.302 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.302 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:39.302 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.302 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.302 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.302 23:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:39.567 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:39.567 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:39.567 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:39.567 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.567 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.567 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:39.567 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.567 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.567 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.567 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.828 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:40.088 23:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:40.348 /dev/nbd0 00:06:40.348 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:40.348 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:40.348 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:40.348 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:40.348 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:40.348 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:40.348 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:40.348 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:40.348 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:40.348 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:40.348 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.348 1+0 records in 00:06:40.348 1+0 records out 00:06:40.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506725 s, 8.1 MB/s 00:06:40.348 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.348 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:40.348 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.348 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:40.348 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:40.348 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.348 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:40.348 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:40.608 /dev/nbd1 00:06:40.608 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:40.609 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:40.609 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:40.609 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:40.609 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:40.609 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:40.609 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:40.609 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:40.609 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:40.609 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:40.609 23:12:04 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.609 1+0 records in 00:06:40.609 1+0 records out 00:06:40.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375497 s, 10.9 MB/s 00:06:40.609 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.609 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:40.609 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.609 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:40.609 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:40.609 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.609 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:40.609 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:40.867 /dev/nbd10 00:06:40.867 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:40.867 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:40.867 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:40.867 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:40.867 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:40.867 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:40.867 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:40.867 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:40.867 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:40.867 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:40.867 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:40.867 1+0 records in 00:06:40.867 1+0 records out 00:06:40.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372488 s, 11.0 MB/s 00:06:40.867 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.867 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:40.867 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:40.867 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:40.867 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:40.867 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.867 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:40.867 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:41.127 /dev/nbd11 00:06:41.127 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 
00:06:41.127 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:41.127 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:41.127 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.127 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.127 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.127 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:41.127 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.127 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.127 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.127 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.127 1+0 records in 00:06:41.127 1+0 records out 00:06:41.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300672 s, 13.6 MB/s 00:06:41.127 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.127 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:41.127 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.127 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.127 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:41.127 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.127 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:41.127 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:41.388 /dev/nbd12 00:06:41.388 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:41.388 23:12:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:41.388 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:41.388 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.388 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.388 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.388 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:41.388 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.388 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.388 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.388 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.388 1+0 records in 00:06:41.388 1+0 records out 00:06:41.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550211 s, 7.4 MB/s 00:06:41.388 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:06:41.388 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:41.388 23:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.388 23:12:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.388 23:12:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:41.388 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.388 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:41.388 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:41.388 /dev/nbd13 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.654 1+0 records in 00:06:41.654 1+0 records out 00:06:41.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363039 s, 11.3 MB/s 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:41.654 { 00:06:41.654 "nbd_device": "/dev/nbd0", 00:06:41.654 "bdev_name": "Nvme0n1" 00:06:41.654 }, 00:06:41.654 { 00:06:41.654 "nbd_device": "/dev/nbd1", 00:06:41.654 "bdev_name": "Nvme1n1" 00:06:41.654 }, 00:06:41.654 { 00:06:41.654 "nbd_device": 
"/dev/nbd10", 00:06:41.654 "bdev_name": "Nvme2n1" 00:06:41.654 }, 00:06:41.654 { 00:06:41.654 "nbd_device": "/dev/nbd11", 00:06:41.654 "bdev_name": "Nvme2n2" 00:06:41.654 }, 00:06:41.654 { 00:06:41.654 "nbd_device": "/dev/nbd12", 00:06:41.654 "bdev_name": "Nvme2n3" 00:06:41.654 }, 00:06:41.654 { 00:06:41.654 "nbd_device": "/dev/nbd13", 00:06:41.654 "bdev_name": "Nvme3n1" 00:06:41.654 } 00:06:41.654 ]' 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.654 { 00:06:41.654 "nbd_device": "/dev/nbd0", 00:06:41.654 "bdev_name": "Nvme0n1" 00:06:41.654 }, 00:06:41.654 { 00:06:41.654 "nbd_device": "/dev/nbd1", 00:06:41.654 "bdev_name": "Nvme1n1" 00:06:41.654 }, 00:06:41.654 { 00:06:41.654 "nbd_device": "/dev/nbd10", 00:06:41.654 "bdev_name": "Nvme2n1" 00:06:41.654 }, 00:06:41.654 { 00:06:41.654 "nbd_device": "/dev/nbd11", 00:06:41.654 "bdev_name": "Nvme2n2" 00:06:41.654 }, 00:06:41.654 { 00:06:41.654 "nbd_device": "/dev/nbd12", 00:06:41.654 "bdev_name": "Nvme2n3" 00:06:41.654 }, 00:06:41.654 { 00:06:41.654 "nbd_device": "/dev/nbd13", 00:06:41.654 "bdev_name": "Nvme3n1" 00:06:41.654 } 00:06:41.654 ]' 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.654 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.654 /dev/nbd1 00:06:41.654 /dev/nbd10 00:06:41.654 /dev/nbd11 00:06:41.654 /dev/nbd12 00:06:41.654 /dev/nbd13' 00:06:41.914 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.914 /dev/nbd1 00:06:41.914 /dev/nbd10 00:06:41.914 /dev/nbd11 00:06:41.914 /dev/nbd12 00:06:41.914 /dev/nbd13' 00:06:41.914 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.914 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:41.914 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:41.914 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:41.914 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:41.914 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:41.914 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:41.914 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.914 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.914 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:41.914 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.914 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:41.914 256+0 records in 00:06:41.914 256+0 records out 00:06:41.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00684393 s, 153 MB/s 00:06:41.914 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.914 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.914 256+0 records in 00:06:41.914 256+0 records out 00:06:41.914 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0538249 s, 19.5 MB/s 00:06:41.914 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.914 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:41.914 256+0 records in 00:06:41.914 256+0 records out 00:06:41.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.049155 s, 21.3 MB/s 00:06:41.914 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.914 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:42.175 256+0 records in 00:06:42.175 256+0 records out 00:06:42.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.207052 s, 5.1 MB/s 00:06:42.175 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.175 23:12:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:42.436 256+0 records in 00:06:42.436 256+0 records out 00:06:42.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240368 s, 4.4 MB/s 00:06:42.436 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.436 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:42.697 256+0 records in 00:06:42.697 256+0 records out 00:06:42.697 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.247231 s, 4.2 MB/s 00:06:42.697 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.697 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:42.958 256+0 records in 00:06:42.958 256+0 records out 00:06:42.958 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.221106 s, 4.7 MB/s 00:06:42.958 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:42.958 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:42.958 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.958 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:42.958 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:42.958 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:42.958 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:42.958 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.958 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:42.958 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.959 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:42.959 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
00:06:42.959 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:42.959 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.959 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:42.959 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.959 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:42.959 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.959 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:42.959 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:42.959 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:42.959 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.959 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:42.959 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:42.959 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:42.959 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.959 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:43.219 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:43.219 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:43.219 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:43.219 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.219 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.219 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:43.219 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.219 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.219 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.219 23:12:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:43.479 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:43.479 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:43.479 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:43.479 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.479 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.479 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:43.479 23:12:07 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:06:43.479 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.479 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.479 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:43.479 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:43.479 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:43.479 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:43.479 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.479 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.479 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:43.479 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.479 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.479 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.479 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:43.739 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:43.739 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:43.739 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:43.739 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.739 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.739 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:43.739 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.739 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.739 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.739 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:43.999 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:43.999 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:43.999 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:43.999 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.999 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.999 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:43.999 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.999 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.999 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.999 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:44.259 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:44.259 23:12:07 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:44.259 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:44.259 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.259 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.259 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:44.259 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.259 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.259 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.259 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.259 23:12:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.520 23:12:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:44.520 23:12:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:44.520 23:12:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.520 23:12:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:44.520 23:12:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:44.520 23:12:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.520 23:12:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:44.520 23:12:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:44.520 23:12:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:44.520 23:12:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:44.520 23:12:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:44.520 23:12:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:44.520 23:12:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:44.520 23:12:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.520 23:12:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:44.520 23:12:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:44.781 malloc_lvol_verify 00:06:44.781 23:12:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:44.781 cecaad32-3270-4808-ab47-ee9da3b65a15 00:06:45.041 23:12:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:45.041 504b1f51-7540-4c4c-8ec6-7b53fd2c60c0 00:06:45.041 23:12:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:45.303 /dev/nbd0 00:06:45.303 23:12:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:45.303 23:12:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:45.303 23:12:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- 
# [[ -e /sys/block/nbd0/size ]] 00:06:45.303 23:12:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:45.303 23:12:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:45.303 mke2fs 1.47.0 (5-Feb-2023) 00:06:45.303 Discarding device blocks: 0/4096 done 00:06:45.303 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:45.303 00:06:45.303 Allocating group tables: 0/1 done 00:06:45.303 Writing inode tables: 0/1 done 00:06:45.303 Creating journal (1024 blocks): done 00:06:45.303 Writing superblocks and filesystem accounting information: 0/1 done 00:06:45.303 00:06:45.303 23:12:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:45.303 23:12:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.303 23:12:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:45.303 23:12:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.303 23:12:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:45.303 23:12:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.303 23:12:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.566 23:12:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.566 23:12:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.566 23:12:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.566 23:12:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.566 23:12:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.566 23:12:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.566 23:12:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.566 23:12:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.566 23:12:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71614 00:06:45.566 23:12:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 71614 ']' 00:06:45.566 23:12:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 71614 00:06:45.566 23:12:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:45.566 23:12:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.566 23:12:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71614 00:06:45.566 killing process with pid 71614 00:06:45.566 23:12:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.566 23:12:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.566 23:12:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71614' 00:06:45.566 23:12:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 71614 00:06:45.566 23:12:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 71614 00:06:45.825 ************************************ 00:06:45.825 END TEST bdev_nbd 00:06:45.825 ************************************ 00:06:45.825 23:12:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:45.825 
00:06:45.825 real 0m9.917s 00:06:45.825 user 0m14.141s 00:06:45.825 sys 0m3.345s 00:06:45.825 23:12:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.825 23:12:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:45.825 23:12:09 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:45.825 23:12:09 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:45.825 skipping fio tests on NVMe due to multi-ns failures. 00:06:45.825 23:12:09 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:45.825 23:12:09 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:45.825 23:12:09 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:45.825 23:12:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:45.825 23:12:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.825 23:12:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:45.825 ************************************ 00:06:45.825 START TEST bdev_verify 00:06:45.825 ************************************ 00:06:45.825 23:12:09 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:45.825 [2024-11-17 23:12:09.566762] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:45.825 [2024-11-17 23:12:09.566903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71987 ] 00:06:46.086 [2024-11-17 23:12:09.709933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.086 [2024-11-17 23:12:09.729503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.086 [2024-11-17 23:12:09.729524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.346 Running I/O for 5 seconds... 
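Note: the bdev_verify stage announced above reduces to one bdevperf command. A minimal sketch for replaying it by hand, assuming the same repo layout as the trace (the flags are copied verbatim from the run_test line; the reading of -C as "every core exercises every bdev" is inferred from the two Core Mask rows each namespace gets in the summary that follows):

SPDK=/home/vagrant/spdk_repo/spdk
# -q 128: 128 outstanding I/Os per job; -o 4096: 4 KiB I/Os; -w verify: the
# read-back-and-compare workload; -t 5: five-second run; -m 0x3: cores 0-1.
"$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3

As a cross-check on the results below, 21350.40 IOPS at 4096 bytes is 21350.40 * 4096 / 2^20 = 83.40 MiB/s, exactly the final progress sample reported.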
00:06:48.784 21248.00 IOPS, 83.00 MiB/s [2024-11-17T23:12:13.546Z] 21152.00 IOPS, 82.62 MiB/s [2024-11-17T23:12:14.484Z] 21440.00 IOPS, 83.75 MiB/s [2024-11-17T23:12:15.427Z] 20992.00 IOPS, 82.00 MiB/s [2024-11-17T23:12:15.427Z] 21350.40 IOPS, 83.40 MiB/s 00:06:51.606 Latency(us) 00:06:51.606 [2024-11-17T23:12:15.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:51.606 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:51.606 Verification LBA range: start 0x0 length 0xbd0bd 00:06:51.606 Nvme0n1 : 5.06 1734.17 6.77 0.00 0.00 73477.65 7864.32 84289.38 00:06:51.606 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:51.606 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:51.606 Nvme0n1 : 5.04 1776.36 6.94 0.00 0.00 71856.45 16232.76 77030.01 00:06:51.606 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:51.606 Verification LBA range: start 0x0 length 0xa0000 00:06:51.606 Nvme1n1 : 5.07 1741.34 6.80 0.00 0.00 73088.13 13510.50 69770.63 00:06:51.606 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:51.606 Verification LBA range: start 0xa0000 length 0xa0000 00:06:51.606 Nvme1n1 : 5.05 1775.87 6.94 0.00 0.00 71788.75 18450.90 70980.53 00:06:51.606 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:51.606 Verification LBA range: start 0x0 length 0x80000 00:06:51.606 Nvme2n1 : 5.08 1740.24 6.80 0.00 0.00 72957.93 14922.04 60494.77 00:06:51.606 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:51.606 Verification LBA range: start 0x80000 length 0x80000 00:06:51.606 Nvme2n1 : 5.05 1775.36 6.94 0.00 0.00 71661.25 19761.62 64931.05 00:06:51.606 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:51.606 Verification LBA range: start 0x0 length 0x80000 00:06:51.606 Nvme2n2 : 5.08 1739.78 6.80 0.00 0.00 72797.32 15325.34 60091.47 00:06:51.606 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:51.606 Verification LBA range: start 0x80000 length 0x80000 00:06:51.606 Nvme2n2 : 5.07 1781.25 6.96 0.00 0.00 71244.56 5999.06 57268.38 00:06:51.606 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:51.606 Verification LBA range: start 0x0 length 0x80000 00:06:51.606 Nvme2n3 : 5.08 1739.26 6.79 0.00 0.00 72685.27 14922.04 61704.66 00:06:51.606 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:51.606 Verification LBA range: start 0x80000 length 0x80000 00:06:51.606 Nvme2n3 : 5.08 1789.75 6.99 0.00 0.00 70814.93 8822.15 58478.28 00:06:51.606 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:51.606 Verification LBA range: start 0x0 length 0x20000 00:06:51.606 Nvme3n1 : 5.08 1737.80 6.79 0.00 0.00 72641.49 15123.69 64527.75 00:06:51.606 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:51.606 Verification LBA range: start 0x20000 length 0x20000 00:06:51.606 Nvme3n1 : 5.08 1788.27 6.99 0.00 0.00 70743.22 10838.65 55655.19 00:06:51.606 [2024-11-17T23:12:15.427Z] =================================================================================================================== 00:06:51.606 [2024-11-17T23:12:15.427Z] Total : 21119.45 82.50 0.00 0.00 72136.10 5999.06 84289.38 00:06:52.190 00:06:52.190 real 0m6.456s 00:06:52.190 user 0m12.200s 00:06:52.190 sys 0m0.196s 00:06:52.190 23:12:15 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.190 23:12:15 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:52.190 ************************************ 00:06:52.190 END TEST bdev_verify 00:06:52.190 ************************************ 00:06:52.459 23:12:16 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:52.459 23:12:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:52.459 23:12:16 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.459 23:12:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:52.459 ************************************ 00:06:52.459 START TEST bdev_verify_big_io 00:06:52.459 ************************************ 00:06:52.459 23:12:16 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:52.459 [2024-11-17 23:12:16.090409] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:52.459 [2024-11-17 23:12:16.090537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72080 ] 00:06:52.459 [2024-11-17 23:12:16.238280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.459 [2024-11-17 23:12:16.261786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.459 [2024-11-17 23:12:16.261820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.029 Running I/O for 5 seconds... 
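Note: bdev_verify_big_io, starting here, repeats the verify workload with the I/O size raised from 4096 to 65536 bytes; the rest of the command line is unchanged. The MiB/s figures in these tables are simply IOPS times I/O size. A quick awk cross-check of the second progress sample below:

# 2314.00 IOPS * 65536 B = 144.625 MiB/s; the log rounds this to 144.62.
awk 'BEGIN { printf "%.3f MiB/s\n", 2314.00 * 65536 / (1024 * 1024) }'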
00:06:58.683 1949.00 IOPS, 121.81 MiB/s [2024-11-17T23:12:22.762Z] 2314.00 IOPS, 144.62 MiB/s [2024-11-17T23:12:23.330Z] 2465.33 IOPS, 154.08 MiB/s 00:06:59.509 Latency(us) 00:06:59.509 [2024-11-17T23:12:23.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.509 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:59.509 Verification LBA range: start 0x0 length 0xbd0b 00:06:59.509 Nvme0n1 : 5.83 84.17 5.26 0.00 0.00 1441749.29 19862.45 1509949.44 00:06:59.509 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:59.509 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:59.509 Nvme0n1 : 5.59 137.39 8.59 0.00 0.00 904426.08 46379.32 929199.66 00:06:59.509 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:59.509 Verification LBA range: start 0x0 length 0xa000 00:06:59.509 Nvme1n1 : 5.84 87.72 5.48 0.00 0.00 1332769.87 104857.60 1290555.08 00:06:59.509 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:59.509 Verification LBA range: start 0xa000 length 0xa000 00:06:59.509 Nvme1n1 : 5.59 137.31 8.58 0.00 0.00 879198.00 120182.94 884030.23 00:06:59.509 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:59.509 Verification LBA range: start 0x0 length 0x8000 00:06:59.509 Nvme2n1 : 5.89 90.87 5.68 0.00 0.00 1223806.73 47790.87 1316366.18 00:06:59.509 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:59.509 Verification LBA range: start 0x8000 length 0x8000 00:06:59.509 Nvme2n1 : 5.67 139.09 8.69 0.00 0.00 843093.39 68560.74 903388.55 00:06:59.509 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:59.509 Verification LBA range: start 0x0 length 0x8000 00:06:59.509 Nvme2n2 : 5.97 107.24 6.70 0.00 0.00 1002689.46 28230.89 1348630.06 00:06:59.509 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:59.509 Verification LBA range: start 0x8000 length 0x8000 00:06:59.509 Nvme2n2 : 5.73 145.12 9.07 0.00 0.00 791294.12 63721.16 929199.66 00:06:59.509 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:59.509 Verification LBA range: start 0x0 length 0x8000 00:06:59.509 Nvme2n3 : 6.06 130.62 8.16 0.00 0.00 792472.88 13510.50 1432516.14 00:06:59.509 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:59.509 Verification LBA range: start 0x8000 length 0x8000 00:06:59.509 Nvme2n3 : 5.82 153.93 9.62 0.00 0.00 729026.45 41539.74 955010.76 00:06:59.509 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:59.509 Verification LBA range: start 0x0 length 0x2000 00:06:59.509 Nvme3n1 : 6.29 240.04 15.00 0.00 0.00 414764.53 460.01 1438968.91 00:06:59.509 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:59.509 Verification LBA range: start 0x2000 length 0x2000 00:06:59.509 Nvme3n1 : 5.83 164.66 10.29 0.00 0.00 664331.15 1852.65 961463.53 00:06:59.509 [2024-11-17T23:12:23.330Z] =================================================================================================================== 00:06:59.509 [2024-11-17T23:12:23.330Z] Total : 1618.17 101.14 0.00 0.00 834782.03 460.01 1509949.44 00:07:00.448 00:07:00.448 real 0m7.962s 00:07:00.448 user 0m15.122s 00:07:00.448 sys 0m0.255s 00:07:00.448 23:12:23 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.448 
************************************ 00:07:00.448 END TEST bdev_verify_big_io 00:07:00.448 ************************************ 00:07:00.448 23:12:23 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:00.448 23:12:24 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:00.448 23:12:24 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:00.448 23:12:24 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.448 23:12:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.448 ************************************ 00:07:00.448 START TEST bdev_write_zeroes 00:07:00.448 ************************************ 00:07:00.448 23:12:24 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:00.448 [2024-11-17 23:12:24.116175] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:00.448 [2024-11-17 23:12:24.116316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72191 ] 00:07:00.448 [2024-11-17 23:12:24.265536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.707 [2024-11-17 23:12:24.296113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.967 Running I/O for 1 seconds... 
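Note: bdev_write_zeroes swaps the workload for write-zeroes commands on a single core (-w write_zeroes -t 1, EAL coremask 0x1 per the parameters above) while keeping queue depth 128 and the 4 KiB I/O size. A sketch of the equivalent standalone invocation, assuming the same layout as before:

SPDK=/home/vagrant/spdk_repo/spdk
# One-second write-zeroes pass across all six NVMe bdevs on core 0.
"$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
    -q 128 -o 4096 -w write_zeroes -t 1

The aggregate in the results below also checks out: 44544.00 IOPS * 4096 B / 2^20 = 174.00 MiB/s, matching the first sample.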
00:07:02.350 44544.00 IOPS, 174.00 MiB/s 00:07:02.350 Latency(us) 00:07:02.350 [2024-11-17T23:12:26.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:02.350 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:02.350 Nvme0n1 : 1.02 7449.89 29.10 0.00 0.00 17117.86 6604.01 31860.58 00:07:02.350 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:02.350 Nvme1n1 : 1.02 7441.19 29.07 0.00 0.00 17116.83 11998.13 26012.75 00:07:02.350 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:02.350 Nvme2n1 : 1.03 7484.81 29.24 0.00 0.00 16956.52 7259.37 22786.36 00:07:02.350 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:02.350 Nvme2n2 : 1.03 7476.31 29.20 0.00 0.00 16921.45 7511.43 22988.01 00:07:02.350 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:02.350 Nvme2n3 : 1.03 7467.84 29.17 0.00 0.00 16879.39 7259.37 23693.78 00:07:02.350 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:02.350 Nvme3n1 : 1.03 7459.16 29.14 0.00 0.00 16842.44 7662.67 23895.43 00:07:02.350 [2024-11-17T23:12:26.171Z] =================================================================================================================== 00:07:02.350 [2024-11-17T23:12:26.171Z] Total : 44779.20 174.92 0.00 0.00 16972.01 6604.01 31860.58 00:07:02.350 00:07:02.350 real 0m1.910s 00:07:02.350 user 0m1.594s 00:07:02.350 sys 0m0.197s 00:07:02.350 ************************************ 00:07:02.350 END TEST bdev_write_zeroes 00:07:02.350 ************************************ 00:07:02.350 23:12:25 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.350 23:12:25 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:02.350 23:12:26 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:02.350 23:12:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:02.350 23:12:26 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.350 23:12:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.350 ************************************ 00:07:02.350 START TEST bdev_json_nonenclosed 00:07:02.350 ************************************ 00:07:02.350 23:12:26 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:02.350 [2024-11-17 23:12:26.101076] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
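Note: bdev_json_nonenclosed, which has just started, and bdev_json_nonarray after it are negative tests: bdevperf is pointed at a deliberately malformed JSON config and the test passes only if the app refuses to start. The log does not show the fixture's contents; a hypothetical stand-in that would trip the same "not enclosed in {}" check is any config whose top level is valid JSON but not an object:

# Illustrative only, not the repo's actual nonenclosed.json fixture.
cat > /tmp/nonenclosed-example.json <<'EOF'
[ { "subsystems": [] } ]
EOF
# json_config_prepare_ctx requires a top-level JSON object, so an app started
# with this file logs "not enclosed in {}" and exits non-zero, which is the
# outcome the test asserts.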
00:07:02.350 [2024-11-17 23:12:26.101209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72227 ] 00:07:02.611 [2024-11-17 23:12:26.249216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.611 [2024-11-17 23:12:26.278709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.611 [2024-11-17 23:12:26.278817] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:02.611 [2024-11-17 23:12:26.278834] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:02.611 [2024-11-17 23:12:26.278849] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.611 00:07:02.611 real 0m0.325s 00:07:02.611 user 0m0.127s 00:07:02.611 sys 0m0.093s 00:07:02.611 23:12:26 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.611 ************************************ 00:07:02.611 END TEST bdev_json_nonenclosed 00:07:02.611 ************************************ 00:07:02.611 23:12:26 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:02.611 23:12:26 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:02.611 23:12:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:02.611 23:12:26 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.611 23:12:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.611 ************************************ 00:07:02.611 START TEST bdev_json_nonarray 00:07:02.611 ************************************ 00:07:02.611 23:12:26 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:02.873 [2024-11-17 23:12:26.492591] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:02.873 [2024-11-17 23:12:26.492744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72253 ] 00:07:02.873 [2024-11-17 23:12:26.641287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.873 [2024-11-17 23:12:26.670288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.873 [2024-11-17 23:12:26.670411] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
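Both json_config rejections here are deliberate negative tests: judging by the error text, nonenclosed.json omits the enclosing top-level {} and nonarray.json supplies "subsystems" as something other than an array, so the loader bails out before creating any bdev and the app then stops non-zero, as the surrounding lines show. For reference, the accepted shape is a top-level object whose "subsystems" member is an array of subsystem objects; a hedged sketch follows (the attach parameters are illustrative, not the actual fixture contents):

  # Sketch of a well-formed SPDK JSON config; the params below are illustrative.
  cat > /tmp/good.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
          }
        ]
      }
    ]
  }
  EOF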
00:07:02.873 [2024-11-17 23:12:26.670431] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:02.873 [2024-11-17 23:12:26.670444] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.135 00:07:03.135 real 0m0.326s 00:07:03.135 user 0m0.133s 00:07:03.135 sys 0m0.089s 00:07:03.135 23:12:26 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.135 ************************************ 00:07:03.135 END TEST bdev_json_nonarray 00:07:03.135 ************************************ 00:07:03.135 23:12:26 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:03.135 23:12:26 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:03.135 23:12:26 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:03.135 23:12:26 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:03.135 23:12:26 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:03.135 23:12:26 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:03.135 23:12:26 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:03.135 23:12:26 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:03.135 23:12:26 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:03.135 23:12:26 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:03.135 23:12:26 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:03.135 23:12:26 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:03.135 00:07:03.135 real 0m31.630s 00:07:03.135 user 0m49.911s 00:07:03.135 sys 0m5.387s 00:07:03.135 ************************************ 00:07:03.135 END TEST blockdev_nvme 00:07:03.135 23:12:26 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.135 23:12:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.135 ************************************ 00:07:03.135 23:12:26 -- spdk/autotest.sh@209 -- # uname -s 00:07:03.135 23:12:26 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:03.135 23:12:26 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:03.135 23:12:26 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:03.135 23:12:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.135 23:12:26 -- common/autotest_common.sh@10 -- # set +x 00:07:03.135 ************************************ 00:07:03.135 START TEST blockdev_nvme_gpt 00:07:03.135 ************************************ 00:07:03.135 23:12:26 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:03.135 * Looking for test storage... 
00:07:03.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:03.135 23:12:26 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:03.135 23:12:26 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:07:03.135 23:12:26 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:03.397 23:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.397 23:12:27 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:03.397 23:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.397 23:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:03.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.397 --rc genhtml_branch_coverage=1 00:07:03.397 --rc genhtml_function_coverage=1 00:07:03.397 --rc genhtml_legend=1 00:07:03.397 --rc geninfo_all_blocks=1 00:07:03.397 --rc geninfo_unexecuted_blocks=1 00:07:03.397 00:07:03.397 ' 00:07:03.397 23:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:03.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.397 --rc 
genhtml_branch_coverage=1 00:07:03.397 --rc genhtml_function_coverage=1 00:07:03.397 --rc genhtml_legend=1 00:07:03.397 --rc geninfo_all_blocks=1 00:07:03.397 --rc geninfo_unexecuted_blocks=1 00:07:03.397 00:07:03.397 ' 00:07:03.397 23:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:03.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.397 --rc genhtml_branch_coverage=1 00:07:03.397 --rc genhtml_function_coverage=1 00:07:03.397 --rc genhtml_legend=1 00:07:03.397 --rc geninfo_all_blocks=1 00:07:03.397 --rc geninfo_unexecuted_blocks=1 00:07:03.397 00:07:03.397 ' 00:07:03.397 23:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:03.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.397 --rc genhtml_branch_coverage=1 00:07:03.397 --rc genhtml_function_coverage=1 00:07:03.397 --rc genhtml_legend=1 00:07:03.397 --rc geninfo_all_blocks=1 00:07:03.397 --rc geninfo_unexecuted_blocks=1 00:07:03.397 00:07:03.397 ' 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72331 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72331 
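At this point start_spdk_tgt launches the target and waitforlisten blocks until the new process answers on /var/tmp/spdk.sock, giving up after a bounded number of retries (max_retries=100 in the trace that follows). A hedged approximation of that wait loop, polling with a harmless RPC:

  # Sketch: start spdk_tgt and poll its default RPC socket until it responds.
  # rpc_get_methods is used here only as a cheap liveness probe.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  spdk_tgt_pid=$!
  for _ in $(seq 1 100); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
  done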
00:07:03.397 23:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 72331 ']' 00:07:03.397 23:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.397 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:03.397 23:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.397 23:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.397 23:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.397 23:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.397 [2024-11-17 23:12:27.123735] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:03.397 [2024-11-17 23:12:27.123905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72331 ] 00:07:03.659 [2024-11-17 23:12:27.272555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.659 [2024-11-17 23:12:27.301631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.233 23:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.233 23:12:27 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:04.233 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:04.233 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:04.233 23:12:27 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:04.492 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:04.751 Waiting for block devices as requested 00:07:04.751 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:04.751 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:05.009 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:05.009 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:10.273 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:10.273 23:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:10.273 23:12:33 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:10.273 23:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:10.273 23:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:10.273 23:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:10.273 23:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:10.273 23:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:10.273 23:12:33 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:10.273 23:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:10.273 23:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:10.273 23:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:10.273 BYT; 00:07:10.273 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:10.273 23:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:10.273 BYT; 00:07:10.273 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:10.273 23:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:10.273 23:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:10.273 23:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:10.273 23:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:10.273 23:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:10.273 23:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:10.273 23:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:10.273 23:12:33 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:10.273 23:12:33 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:10.273 23:12:33 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:10.273 23:12:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:10.273 23:12:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:10.273 23:12:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:10.273 23:12:33 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:10.273 23:12:33 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:10.273 23:12:33 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:10.273 23:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:10.273 23:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:10.273 23:12:33 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:10.273 23:12:33 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:10.273 23:12:33 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:10.273 23:12:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:10.273 23:12:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:10.273 23:12:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:10.273 23:12:33 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:10.273 23:12:33 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:10.273 23:12:33 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:10.273 23:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:10.273 23:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:11.219 The operation has completed successfully. 00:07:11.219 23:12:34 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:12.651 The operation has completed successfully. 00:07:12.651 23:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:12.909 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:13.475 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:13.475 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:13.475 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:13.475 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:13.475 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:13.475 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.475 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.475 [] 00:07:13.475 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.475 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:13.475 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:13.475 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:13.475 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:13.475 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:13.475 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.475 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.732 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.732 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:13.732 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.732 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.732 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.732 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:13.732 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:13.732 23:12:37 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.732 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.990 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.990 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:13.990 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.990 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.990 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.990 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:13.990 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.990 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.990 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.990 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:13.990 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:13.990 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:13.990 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.990 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.990 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.990 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:13.991 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "02f71137-7406-41c6-8682-44d457cbcfc3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "02f71137-7406-41c6-8682-44d457cbcfc3",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "de422b49-9470-4eea-b466-0926edeb7daa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "de422b49-9470-4eea-b466-0926edeb7daa",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' 
' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "f90168af-a24d-4123-a41e-45363ccea907"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f90168af-a24d-4123-a41e-45363ccea907",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "2830e660-00b4-4eef-b793-cdd755333de6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2830e660-00b4-4eef-b793-cdd755333de6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "7959f0c1-07a4-407c-afd2-7a21a15eb4b5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7959f0c1-07a4-407c-afd2-7a21a15eb4b5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:13.991 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:13.991 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:13.991 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:13.991 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:13.991 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 72331 00:07:13.991 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 72331 ']' 00:07:13.991 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 72331 00:07:13.991 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:13.991 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.991 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72331 00:07:13.991 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.991 killing process with pid 72331 00:07:13.991 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.991 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72331' 00:07:13.991 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 72331 00:07:13.991 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 72331 00:07:14.249 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:14.249 23:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:14.249 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:14.249 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.249 23:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:14.249 ************************************ 00:07:14.249 START TEST bdev_hello_world 00:07:14.249 ************************************ 00:07:14.249 23:12:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:14.249 [2024-11-17 
23:12:38.006899] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:14.249 [2024-11-17 23:12:38.007046] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72944 ] 00:07:14.506 [2024-11-17 23:12:38.148744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.506 [2024-11-17 23:12:38.164950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.763 [2024-11-17 23:12:38.527622] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:14.763 [2024-11-17 23:12:38.527669] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:14.763 [2024-11-17 23:12:38.527689] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:14.763 [2024-11-17 23:12:38.529759] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:14.763 [2024-11-17 23:12:38.530552] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:14.763 [2024-11-17 23:12:38.530581] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:14.763 [2024-11-17 23:12:38.531137] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:14.763 00:07:14.763 [2024-11-17 23:12:38.531169] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:15.021 00:07:15.021 real 0m0.725s 00:07:15.021 user 0m0.489s 00:07:15.021 sys 0m0.132s 00:07:15.021 23:12:38 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.021 ************************************ 00:07:15.021 END TEST bdev_hello_world 00:07:15.021 ************************************ 00:07:15.021 23:12:38 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:15.021 23:12:38 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:15.021 23:12:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:15.021 23:12:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.021 23:12:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:15.021 ************************************ 00:07:15.021 START TEST bdev_bounds 00:07:15.021 ************************************ 00:07:15.021 23:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:15.021 23:12:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72975 00:07:15.021 23:12:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:15.021 Process bdevio pid: 72975 00:07:15.021 23:12:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72975' 00:07:15.021 23:12:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72975 00:07:15.021 23:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72975 ']' 00:07:15.021 23:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
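Before the bdevio suites below begin, note where two of their targets come from: setup_gpt_conf earlier saw parted report "unrecognised disk label" on /dev/nvme0n1 (exactly the string blockdev.sh pattern-matches to pick a disk it may safely label; the kernel's nvme0n1 evidently maps to SPDK's Nvme1 controller in this run), created a GPT with two halves, and stamped them with the SPDK partition-type GUIDs parsed out of module/bdev/gpt/gpt.h. Those partitions are what surface as the Nvme1n1p1 and Nvme1n1p2 bdevs. A condensed replay of the steps, as traced above:

  # Label the disk, create two partitions, then set SPDK's partition type GUIDs
  # (GUIDs as extracted from module/bdev/gpt/gpt.h in the trace above).
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
  sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
  sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1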
00:07:15.021 23:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:15.021 23:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:15.021 23:12:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:07:15.021 23:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:15.021 23:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:07:15.022 [2024-11-17 23:12:38.795654] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:07:15.022 [2024-11-17 23:12:38.795773] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72975 ]
00:07:15.279 [2024-11-17 23:12:38.935536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:15.279 [2024-11-17 23:12:38.957116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:15.279 [2024-11-17 23:12:38.957398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:15.279 [2024-11-17 23:12:38.957427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:15.843 23:12:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:15.843 23:12:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:07:15.843 23:12:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:07:16.100 I/O targets:
00:07:16.101 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:07:16.101 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:07:16.101 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:07:16.101 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:16.101 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:16.101 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:16.101 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:07:16.101
00:07:16.101
00:07:16.101 CUnit - A unit testing framework for C - Version 2.1-3
00:07:16.101 http://cunit.sourceforge.net/
00:07:16.101
00:07:16.101
00:07:16.101 Suite: bdevio tests on: Nvme3n1
00:07:16.101 Test: blockdev write read block ...passed
00:07:16.101 Test: blockdev write zeroes read block ...passed
00:07:16.101 Test: blockdev write zeroes read no split ...passed
00:07:16.101 Test: blockdev write zeroes read split ...passed
00:07:16.101 Test: blockdev write zeroes read split partial ...passed
00:07:16.101 Test: blockdev reset ...[2024-11-17 23:12:39.771585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:07:16.101 [2024-11-17 23:12:39.774688] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
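The bdevio server above was started with -w, so it loads the bdevs from bdev.json and then sits waiting; tests.py perform_tests issues the RPC that actually kicks off the CUnit suites. To reproduce outside the harness, the same two commands from the trace can be run directly:

  # Sketch: start the bdevio server in the background, then drive the suites.
  # The sleep is a crude stand-in for the harness's waitforlisten.
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  sleep 2
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests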
00:07:16.101 passed 00:07:16.101 Test: blockdev write read 8 blocks ...passed 00:07:16.101 Test: blockdev write read size > 128k ...passed 00:07:16.101 Test: blockdev write read invalid size ...passed 00:07:16.101 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.101 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.101 Test: blockdev write read max offset ...passed 00:07:16.101 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.101 Test: blockdev writev readv 8 blocks ...passed 00:07:16.101 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.101 Test: blockdev writev readv block ...passed 00:07:16.101 Test: blockdev writev readv size > 128k ...passed 00:07:16.101 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.101 Test: blockdev comparev and writev ...[2024-11-17 23:12:39.789528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c560a000 len:0x1000 00:07:16.101 [2024-11-17 23:12:39.789573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:16.101 passed 00:07:16.101 Test: blockdev nvme passthru rw ...passed 00:07:16.101 Test: blockdev nvme passthru vendor specific ...[2024-11-17 23:12:39.791484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:16.101 [2024-11-17 23:12:39.791522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:16.101 passed 00:07:16.101 Test: blockdev nvme admin passthru ...passed 00:07:16.101 Test: blockdev copy ...passed 00:07:16.101 Suite: bdevio tests on: Nvme2n3 00:07:16.101 Test: blockdev write read block ...passed 00:07:16.101 Test: blockdev write zeroes read block ...passed 00:07:16.101 Test: blockdev write zeroes read no split ...passed 00:07:16.101 Test: blockdev write zeroes read split ...passed 00:07:16.101 Test: blockdev write zeroes read split partial ...passed 00:07:16.101 Test: blockdev reset ...[2024-11-17 23:12:39.808679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:16.101 [2024-11-17 23:12:39.812280] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
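Each suite's "blockdev reset" test exercises a full controller disconnect/reconnect; the paired nvme_ctrlr_disconnect and bdev_nvme_reset_ctrlr_complete notices in these suites are that round trip. The same path can be poked from the RPC side; a hedged sketch, since the method name is assumed from current SPDK rather than confirmed by this log:

  # Sketch: ask bdev_nvme to reset a named controller over RPC.
  # Method name assumed; confirm with `scripts/rpc.py --help` on your tree.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme2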
00:07:16.101 passed 00:07:16.101 Test: blockdev write read 8 blocks ...passed 00:07:16.101 Test: blockdev write read size > 128k ...passed 00:07:16.101 Test: blockdev write read invalid size ...passed 00:07:16.101 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.101 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.101 Test: blockdev write read max offset ...passed 00:07:16.101 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.101 Test: blockdev writev readv 8 blocks ...passed 00:07:16.101 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.101 Test: blockdev writev readv block ...passed 00:07:16.101 Test: blockdev writev readv size > 128k ...passed 00:07:16.101 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.101 Test: blockdev comparev and writev ...[2024-11-17 23:12:39.825118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b3c04000 len:0x1000 00:07:16.101 [2024-11-17 23:12:39.825163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:16.101 passed 00:07:16.101 Test: blockdev nvme passthru rw ...passed 00:07:16.101 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.101 Test: blockdev nvme admin passthru ...[2024-11-17 23:12:39.827177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:16.101 [2024-11-17 23:12:39.827219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:16.101 passed 00:07:16.101 Test: blockdev copy ...passed 00:07:16.101 Suite: bdevio tests on: Nvme2n2 00:07:16.101 Test: blockdev write read block ...passed 00:07:16.101 Test: blockdev write zeroes read block ...passed 00:07:16.101 Test: blockdev write zeroes read no split ...passed 00:07:16.101 Test: blockdev write zeroes read split ...passed 00:07:16.101 Test: blockdev write zeroes read split partial ...passed 00:07:16.101 Test: blockdev reset ...[2024-11-17 23:12:39.846790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:16.101 [2024-11-17 23:12:39.849039] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:16.101 passed 00:07:16.101 Test: blockdev write read 8 blocks ...passed 00:07:16.101 Test: blockdev write read size > 128k ...passed 00:07:16.101 Test: blockdev write read invalid size ...passed 00:07:16.101 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.101 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.101 Test: blockdev write read max offset ...passed 00:07:16.101 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.101 Test: blockdev writev readv 8 blocks ...passed 00:07:16.101 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.101 Test: blockdev writev readv block ...passed 00:07:16.101 Test: blockdev writev readv size > 128k ...passed 00:07:16.101 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.101 Test: blockdev comparev and writev ...[2024-11-17 23:12:39.862841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b3c04000 len:0x1000 00:07:16.101 [2024-11-17 23:12:39.862888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:16.101 passed 00:07:16.101 Test: blockdev nvme passthru rw ...passed 00:07:16.101 Test: blockdev nvme passthru vendor specific ...[2024-11-17 23:12:39.864952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:16.101 [2024-11-17 23:12:39.864977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:16.101 passed 00:07:16.101 Test: blockdev nvme admin passthru ...passed 00:07:16.101 Test: blockdev copy ...passed 00:07:16.101 Suite: bdevio tests on: Nvme2n1 00:07:16.101 Test: blockdev write read block ...passed 00:07:16.101 Test: blockdev write zeroes read block ...passed 00:07:16.101 Test: blockdev write zeroes read no split ...passed 00:07:16.101 Test: blockdev write zeroes read split ...passed 00:07:16.101 Test: blockdev write zeroes read split partial ...passed 00:07:16.101 Test: blockdev reset ...[2024-11-17 23:12:39.885764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:16.101 [2024-11-17 23:12:39.887777] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:16.101 passed 00:07:16.101 Test: blockdev write read 8 blocks ...passed 00:07:16.101 Test: blockdev write read size > 128k ...passed 00:07:16.101 Test: blockdev write read invalid size ...passed 00:07:16.101 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.101 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.101 Test: blockdev write read max offset ...passed 00:07:16.101 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.101 Test: blockdev writev readv 8 blocks ...passed 00:07:16.101 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.101 Test: blockdev writev readv block ...passed 00:07:16.101 Test: blockdev writev readv size > 128k ...passed 00:07:16.101 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.101 Test: blockdev comparev and writev ...[2024-11-17 23:12:39.901699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b3c06000 len:0x1000 00:07:16.101 [2024-11-17 23:12:39.901737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:16.101 passed 00:07:16.101 Test: blockdev nvme passthru rw ...passed 00:07:16.101 Test: blockdev nvme passthru vendor specific ...[2024-11-17 23:12:39.903558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:16.101 [2024-11-17 23:12:39.903586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:16.101 passed 00:07:16.101 Test: blockdev nvme admin passthru ...passed 00:07:16.101 Test: blockdev copy ...passed 00:07:16.101 Suite: bdevio tests on: Nvme1n1p2 00:07:16.102 Test: blockdev write read block ...passed 00:07:16.102 Test: blockdev write zeroes read block ...passed 00:07:16.102 Test: blockdev write zeroes read no split ...passed 00:07:16.359 Test: blockdev write zeroes read split ...passed 00:07:16.359 Test: blockdev write zeroes read split partial ...passed 00:07:16.359 Test: blockdev reset ...[2024-11-17 23:12:39.924390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:16.359 [2024-11-17 23:12:39.926924] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:16.359 passed 00:07:16.359 Test: blockdev write read 8 blocks ...passed 00:07:16.359 Test: blockdev write read size > 128k ...passed 00:07:16.359 Test: blockdev write read invalid size ...passed 00:07:16.359 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.359 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.359 Test: blockdev write read max offset ...passed 00:07:16.359 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.359 Test: blockdev writev readv 8 blocks ...passed 00:07:16.359 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.359 Test: blockdev writev readv block ...passed 00:07:16.360 Test: blockdev writev readv size > 128k ...passed 00:07:16.360 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.360 Test: blockdev comparev and writev ...[2024-11-17 23:12:39.941557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2b3c02000 len:0x1000 00:07:16.360 [2024-11-17 23:12:39.941593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:16.360 passed 00:07:16.360 Test: blockdev nvme passthru rw ...passed 00:07:16.360 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.360 Test: blockdev nvme admin passthru ...passed 00:07:16.360 Test: blockdev copy ...passed 00:07:16.360 Suite: bdevio tests on: Nvme1n1p1 00:07:16.360 Test: blockdev write read block ...passed 00:07:16.360 Test: blockdev write zeroes read block ...passed 00:07:16.360 Test: blockdev write zeroes read no split ...passed 00:07:16.360 Test: blockdev write zeroes read split ...passed 00:07:16.360 Test: blockdev write zeroes read split partial ...passed 00:07:16.360 Test: blockdev reset ...[2024-11-17 23:12:39.960557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:16.360 [2024-11-17 23:12:39.962894] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
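Each suite's reset test drives the same two-step sequence seen in the notices above: nvme_ctrlr_disconnect on the controller's PCIe address, then bdev_nvme_reset_ctrlr_complete once it comes back. The same reset can be triggered by hand over an RPC socket; a minimal sketch, assuming the bdev_nvme_reset_controller and bdev_nvme_get_controllers RPCs are available in this SPDK tree (the controller name Nvme1 is an example, not taken from this run):

  # Reset the controller behind the Nvme1 bdevs, then confirm it came back.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_reset_controller Nvme1        # assumed RPC; takes the name given at attach time
  $rpc bdev_nvme_get_controllers -n Nvme1      # assumed RPC; the listed controller should be online again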
00:07:16.360 passed 00:07:16.360 Test: blockdev write read 8 blocks ...passed 00:07:16.360 Test: blockdev write read size > 128k ...passed 00:07:16.360 Test: blockdev write read invalid size ...passed 00:07:16.360 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.360 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.360 Test: blockdev write read max offset ...passed 00:07:16.360 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.360 Test: blockdev writev readv 8 blocks ...passed 00:07:16.360 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.360 Test: blockdev writev readv block ...passed 00:07:16.360 Test: blockdev writev readv size > 128k ...passed 00:07:16.360 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.360 Test: blockdev comparev and writev ...[2024-11-17 23:12:39.977668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2cf83b000 len:0x1000 00:07:16.360 [2024-11-17 23:12:39.977706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:16.360 passed 00:07:16.360 Test: blockdev nvme passthru rw ...passed 00:07:16.360 Test: blockdev nvme passthru vendor specific ...passed 00:07:16.360 Test: blockdev nvme admin passthru ...passed 00:07:16.360 Test: blockdev copy ...passed 00:07:16.360 Suite: bdevio tests on: Nvme0n1 00:07:16.360 Test: blockdev write read block ...passed 00:07:16.360 Test: blockdev write zeroes read block ...passed 00:07:16.360 Test: blockdev write zeroes read no split ...passed 00:07:16.360 Test: blockdev write zeroes read split ...passed 00:07:16.360 Test: blockdev write zeroes read split partial ...passed 00:07:16.360 Test: blockdev reset ...[2024-11-17 23:12:39.997477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:16.360 [2024-11-17 23:12:39.999431] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:16.360 passed 00:07:16.360 Test: blockdev write read 8 blocks ...passed 00:07:16.360 Test: blockdev write read size > 128k ...passed 00:07:16.360 Test: blockdev write read invalid size ...passed 00:07:16.360 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:16.360 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:16.360 Test: blockdev write read max offset ...passed 00:07:16.360 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:16.360 Test: blockdev writev readv 8 blocks ...passed 00:07:16.360 Test: blockdev writev readv 30 x 1block ...passed 00:07:16.360 Test: blockdev writev readv block ...passed 00:07:16.360 Test: blockdev writev readv size > 128k ...passed 00:07:16.360 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:16.360 Test: blockdev comparev and writev ...passed 00:07:16.360 Test: blockdev nvme passthru rw ...[2024-11-17 23:12:40.010143] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:16.360 separate metadata which is not supported yet. 
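Two details in the output above are worth calling out. First, the comparev in each partition suite targets byte 0 of the partition bdev, yet the absolute namespace LBA printed differs (lba:256 for Nvme1n1p1, lba:655360 for Nvme1n1p2) because the GPT layer adds each partition's start offset. With the 4 KiB block size implied by len:1 mapping to a 0x1000-byte data block, that works out to:

  Nvme1n1p1: 256    blocks x 4096 B = 1,048,576 B     (1 MiB into the namespace)
  Nvme1n1p2: 655360 blocks x 4096 B = 2,684,354,560 B (2.5 GiB into the namespace)

Second, the ERROR line just above is also expected: bdevio declines comparev_and_writev on Nvme0n1 because that bdev carries separate metadata, which the test does not support yet, so the suite skips rather than fails.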
00:07:16.360 passed 00:07:16.360 Test: blockdev nvme passthru vendor specific ...[2024-11-17 23:12:40.011519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:16.360 [2024-11-17 23:12:40.011552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:16.360 passed 00:07:16.360 Test: blockdev nvme admin passthru ...passed 00:07:16.360 Test: blockdev copy ...passed 00:07:16.360 00:07:16.360 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.360 suites 7 7 n/a 0 0 00:07:16.360 tests 161 161 161 0 0 00:07:16.360 asserts 1025 1025 1025 0 n/a 00:07:16.360 00:07:16.360 Elapsed time = 0.583 seconds 00:07:16.360 0 00:07:16.360 23:12:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72975 00:07:16.360 23:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72975 ']' 00:07:16.360 23:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72975 00:07:16.360 23:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:16.360 23:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.360 23:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72975 00:07:16.360 23:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.360 23:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.360 killing process with pid 72975 00:07:16.360 23:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72975' 00:07:16.360 23:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72975 00:07:16.360 23:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72975 00:07:16.618 23:12:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:16.618 00:07:16.618 real 0m1.467s 00:07:16.619 user 0m3.787s 00:07:16.619 sys 0m0.257s 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:16.619 ************************************ 00:07:16.619 END TEST bdev_bounds 00:07:16.619 ************************************ 00:07:16.619 23:12:40 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:16.619 23:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:16.619 23:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.619 23:12:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:16.619 ************************************ 00:07:16.619 START TEST bdev_nbd 00:07:16.619 ************************************ 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73018 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73018 /var/tmp/spdk-nbd.sock 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73018 ']' 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:16.619 23:12:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:16.619 [2024-11-17 23:12:40.328830] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
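The nbd_function_test starting up here launches a bdev_svc app on /var/tmp/spdk-nbd.sock and exports each bdev as a kernel block device over NBD. Condensed to its three RPCs, the flow the trace below walks through looks like this (a sketch only; the socket path and the bdev/device pairing mirror the test's own arguments):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
  $rpc nbd_start_disk Nvme0n1 /dev/nbd0   # attach bdev Nvme0n1 to a kernel nbd device
  $rpc nbd_get_disks                      # JSON list of active nbd_device/bdev_name pairs
  $rpc nbd_stop_disk /dev/nbd0            # detach the export once the I/O checks pass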
00:07:16.619 [2024-11-17 23:12:40.328962] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.876 [2024-11-17 23:12:40.474578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.876 [2024-11-17 23:12:40.493709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.444 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.444 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:17.444 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:17.444 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.444 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:17.444 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:17.445 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:17.445 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.445 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:17.445 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:17.445 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:17.445 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:17.445 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:17.445 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:17.445 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:17.707 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:17.707 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:17.707 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:17.707 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:17.707 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:17.707 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.707 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.707 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:17.707 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:17.707 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.707 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.707 23:12:41 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.707 1+0 records in 00:07:17.707 1+0 records out 00:07:17.707 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000840137 s, 4.9 MB/s 00:07:17.707 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.707 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:17.707 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.707 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.707 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:17.707 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:17.707 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:17.707 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.965 1+0 records in 00:07:17.965 1+0 records out 00:07:17.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281605 s, 14.5 MB/s 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:17.965 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:18.226 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:18.226 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:18.226 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.226 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.226 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.227 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:18.227 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.227 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.227 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.227 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.227 1+0 records in 00:07:18.227 1+0 records out 00:07:18.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000982396 s, 4.2 MB/s 00:07:18.227 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.227 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.227 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.227 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.227 23:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.227 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:18.227 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:18.227 23:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:18.227 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:18.227 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:18.227 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:18.227 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:18.227 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.227 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.227 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.227 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:18.227 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.227 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.227 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.227 23:12:42 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.227 1+0 records in 00:07:18.227 1+0 records out 00:07:18.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568155 s, 7.2 MB/s 00:07:18.227 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.227 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.227 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.227 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.227 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.227 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:18.227 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:18.227 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:18.486 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:18.486 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:18.486 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:18.486 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:18.486 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.486 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.486 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.486 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:18.486 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.486 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.487 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.487 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.487 1+0 records in 00:07:18.487 1+0 records out 00:07:18.487 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000817753 s, 5.0 MB/s 00:07:18.487 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.487 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.487 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.487 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.487 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.487 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:18.487 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:18.487 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:18.744 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:18.744 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:18.744 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:18.744 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:18.744 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.744 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.744 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.744 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:18.744 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.744 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.744 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.744 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.744 1+0 records in 00:07:18.744 1+0 records out 00:07:18.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000980042 s, 4.2 MB/s 00:07:18.744 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.744 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.744 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.744 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.744 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.744 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:18.744 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:18.744 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:19.002 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:19.002 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:19.002 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:19.002 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:19.002 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.002 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.002 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.002 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:19.002 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.002 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.002 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.002 23:12:42 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.002 1+0 records in 00:07:19.002 1+0 records out 00:07:19.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447606 s, 9.2 MB/s 00:07:19.002 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.002 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.002 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.002 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.002 23:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.002 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:19.003 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:19.003 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.263 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:19.263 { 00:07:19.263 "nbd_device": "/dev/nbd0", 00:07:19.263 "bdev_name": "Nvme0n1" 00:07:19.263 }, 00:07:19.263 { 00:07:19.263 "nbd_device": "/dev/nbd1", 00:07:19.263 "bdev_name": "Nvme1n1p1" 00:07:19.263 }, 00:07:19.263 { 00:07:19.263 "nbd_device": "/dev/nbd2", 00:07:19.263 "bdev_name": "Nvme1n1p2" 00:07:19.263 }, 00:07:19.263 { 00:07:19.263 "nbd_device": "/dev/nbd3", 00:07:19.263 "bdev_name": "Nvme2n1" 00:07:19.263 }, 00:07:19.263 { 00:07:19.263 "nbd_device": "/dev/nbd4", 00:07:19.263 "bdev_name": "Nvme2n2" 00:07:19.263 }, 00:07:19.263 { 00:07:19.263 "nbd_device": "/dev/nbd5", 00:07:19.263 "bdev_name": "Nvme2n3" 00:07:19.263 }, 00:07:19.263 { 00:07:19.263 "nbd_device": "/dev/nbd6", 00:07:19.263 "bdev_name": "Nvme3n1" 00:07:19.263 } 00:07:19.263 ]' 00:07:19.263 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:19.263 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:19.263 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:19.263 { 00:07:19.263 "nbd_device": "/dev/nbd0", 00:07:19.263 "bdev_name": "Nvme0n1" 00:07:19.263 }, 00:07:19.263 { 00:07:19.263 "nbd_device": "/dev/nbd1", 00:07:19.263 "bdev_name": "Nvme1n1p1" 00:07:19.263 }, 00:07:19.263 { 00:07:19.263 "nbd_device": "/dev/nbd2", 00:07:19.263 "bdev_name": "Nvme1n1p2" 00:07:19.263 }, 00:07:19.263 { 00:07:19.263 "nbd_device": "/dev/nbd3", 00:07:19.263 "bdev_name": "Nvme2n1" 00:07:19.263 }, 00:07:19.263 { 00:07:19.263 "nbd_device": "/dev/nbd4", 00:07:19.263 "bdev_name": "Nvme2n2" 00:07:19.263 }, 00:07:19.263 { 00:07:19.263 "nbd_device": "/dev/nbd5", 00:07:19.263 "bdev_name": "Nvme2n3" 00:07:19.263 }, 00:07:19.263 { 00:07:19.263 "nbd_device": "/dev/nbd6", 00:07:19.263 "bdev_name": "Nvme3n1" 00:07:19.263 } 00:07:19.263 ]' 00:07:19.263 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:19.263 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.263 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:19.263 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:19.263 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:19.263 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.263 23:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:19.525 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:19.525 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:19.525 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:19.525 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.525 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.525 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:19.525 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.525 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.525 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.525 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:19.785 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:19.785 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:19.785 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:19.785 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.785 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.785 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:19.785 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.785 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.785 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.785 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:19.785 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:19.785 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:19.785 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:19.785 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.785 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.786 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:19.786 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.786 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.786 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.786 23:12:43 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:20.045 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:20.045 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:20.045 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:20.045 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.045 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.045 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:20.045 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.045 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.045 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.045 23:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:20.306 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:20.306 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:20.306 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:20.306 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.306 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.306 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:20.306 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.306 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.306 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.306 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:20.567 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:20.567 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:20.567 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:20.567 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.567 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.567 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:20.567 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.567 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.567 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.567 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:20.829 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:20.829 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:20.829 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:20.829 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.829 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.829 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:20.829 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.829 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.829 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:20.829 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.829 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.091 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:21.091 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:21.091 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.091 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:21.091 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:21.091 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.091 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:21.091 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:21.091 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:21.091 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:21.091 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:21.091 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:21.091 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:21.091 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.091 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:21.091 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:21.091 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:21.091 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:21.091 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:21.092 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.092 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:21.092 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:21.092 
23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:21.092 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:21.092 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:21.092 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:21.092 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:21.092 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:21.352 /dev/nbd0 00:07:21.352 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:21.352 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:21.352 23:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:21.352 23:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:21.352 23:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:21.352 23:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:21.352 23:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:21.352 23:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:21.352 23:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:21.352 23:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:21.352 23:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.352 1+0 records in 00:07:21.352 1+0 records out 00:07:21.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610181 s, 6.7 MB/s 00:07:21.352 23:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.352 23:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:21.352 23:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.352 23:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:21.352 23:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:21.353 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.353 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:21.353 23:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:21.613 /dev/nbd1 00:07:21.613 23:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:21.613 23:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:21.613 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:21.613 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:21.613 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:21.613 23:12:45 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:21.613 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:21.613 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.614 1+0 records in 00:07:21.614 1+0 records out 00:07:21.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377951 s, 10.8 MB/s 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:21.614 /dev/nbd10 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.614 1+0 records in 00:07:21.614 1+0 records out 00:07:21.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333979 s, 12.3 MB/s 00:07:21.614 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.874 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:21.874 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.874 23:12:45 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:21.874 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:21.874 23:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.874 23:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:21.874 23:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:21.874 /dev/nbd11 00:07:21.874 23:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:21.874 23:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:21.875 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:21.875 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:21.875 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:21.875 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:21.875 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:21.875 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:21.875 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:21.875 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:21.875 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.875 1+0 records in 00:07:21.875 1+0 records out 00:07:21.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509766 s, 8.0 MB/s 00:07:21.875 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.875 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:21.875 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.875 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:21.875 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:21.875 23:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.875 23:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:21.875 23:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:22.137 /dev/nbd12 00:07:22.137 23:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:22.137 23:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:22.137 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:22.137 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:22.137 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:22.137 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:22.137 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
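The waitfornbd helper whose trace repeats through this stretch does two things before declaring a device usable: it polls /proc/partitions until the kernel lists the nbd device, then performs one 4 KiB direct-I/O read and checks that a nonzero number of bytes landed. A standalone sketch of the same check (the device name, retry count, and sleep interval are illustrative):

  nbd=nbd12
  for _ in $(seq 1 20); do
      grep -q -w "$nbd" /proc/partitions && break    # is the device visible to the kernel yet?
      sleep 0.1
  done
  out=$(mktemp)
  dd if=/dev/$nbd of="$out" bs=4096 count=1 iflag=direct   # direct I/O bypasses the page cache
  [ "$(stat -c %s "$out")" -ne 0 ] && echo "$nbd serves reads"
  rm -f "$out"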
00:07:22.137 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:22.137 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:22.137 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:22.137 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:22.137 1+0 records in 00:07:22.137 1+0 records out 00:07:22.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532441 s, 7.7 MB/s 00:07:22.137 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.137 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:22.137 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.137 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:22.137 23:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:22.137 23:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.137 23:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:22.137 23:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:22.398 /dev/nbd13 00:07:22.398 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:22.398 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:22.398 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:22.398 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:22.398 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:22.398 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:22.398 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:22.398 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:22.398 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:22.398 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:22.398 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:22.398 1+0 records in 00:07:22.398 1+0 records out 00:07:22.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509649 s, 8.0 MB/s 00:07:22.398 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.398 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:22.398 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.398 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:22.399 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:22.399 23:12:46 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.399 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:22.399 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:22.657 /dev/nbd14 00:07:22.657 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:22.657 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:22.657 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:22.657 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:22.657 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:22.657 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:22.657 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:22.657 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:22.657 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:22.657 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:22.658 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:22.658 1+0 records in 00:07:22.658 1+0 records out 00:07:22.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054311 s, 7.5 MB/s 00:07:22.658 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.658 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:22.658 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.658 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:22.658 23:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:22.658 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.658 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:22.658 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:22.658 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.658 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.918 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:22.918 { 00:07:22.918 "nbd_device": "/dev/nbd0", 00:07:22.918 "bdev_name": "Nvme0n1" 00:07:22.918 }, 00:07:22.918 { 00:07:22.918 "nbd_device": "/dev/nbd1", 00:07:22.918 "bdev_name": "Nvme1n1p1" 00:07:22.918 }, 00:07:22.918 { 00:07:22.918 "nbd_device": "/dev/nbd10", 00:07:22.918 "bdev_name": "Nvme1n1p2" 00:07:22.918 }, 00:07:22.918 { 00:07:22.918 "nbd_device": "/dev/nbd11", 00:07:22.918 "bdev_name": "Nvme2n1" 00:07:22.918 }, 00:07:22.918 { 00:07:22.918 "nbd_device": "/dev/nbd12", 00:07:22.918 "bdev_name": "Nvme2n2" 00:07:22.918 }, 00:07:22.918 { 00:07:22.918 "nbd_device": "/dev/nbd13", 00:07:22.918 "bdev_name": "Nvme2n3" 
00:07:22.918 }, 00:07:22.918 { 00:07:22.918 "nbd_device": "/dev/nbd14", 00:07:22.918 "bdev_name": "Nvme3n1" 00:07:22.918 } 00:07:22.918 ]' 00:07:22.918 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:22.918 { 00:07:22.918 "nbd_device": "/dev/nbd0", 00:07:22.918 "bdev_name": "Nvme0n1" 00:07:22.918 }, 00:07:22.918 { 00:07:22.918 "nbd_device": "/dev/nbd1", 00:07:22.918 "bdev_name": "Nvme1n1p1" 00:07:22.918 }, 00:07:22.918 { 00:07:22.918 "nbd_device": "/dev/nbd10", 00:07:22.918 "bdev_name": "Nvme1n1p2" 00:07:22.918 }, 00:07:22.918 { 00:07:22.918 "nbd_device": "/dev/nbd11", 00:07:22.918 "bdev_name": "Nvme2n1" 00:07:22.918 }, 00:07:22.918 { 00:07:22.918 "nbd_device": "/dev/nbd12", 00:07:22.918 "bdev_name": "Nvme2n2" 00:07:22.918 }, 00:07:22.918 { 00:07:22.918 "nbd_device": "/dev/nbd13", 00:07:22.918 "bdev_name": "Nvme2n3" 00:07:22.918 }, 00:07:22.918 { 00:07:22.918 "nbd_device": "/dev/nbd14", 00:07:22.918 "bdev_name": "Nvme3n1" 00:07:22.918 } 00:07:22.918 ]' 00:07:22.918 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.918 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:22.918 /dev/nbd1 00:07:22.918 /dev/nbd10 00:07:22.918 /dev/nbd11 00:07:22.918 /dev/nbd12 00:07:22.918 /dev/nbd13 00:07:22.918 /dev/nbd14' 00:07:22.918 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.918 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:22.918 /dev/nbd1 00:07:22.918 /dev/nbd10 00:07:22.918 /dev/nbd11 00:07:22.918 /dev/nbd12 00:07:22.918 /dev/nbd13 00:07:22.918 /dev/nbd14' 00:07:22.918 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:22.918 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:22.918 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:22.918 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:22.918 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:22.918 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:22.918 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:22.918 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:22.919 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:22.919 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:22.919 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:22.919 256+0 records in 00:07:22.919 256+0 records out 00:07:22.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122464 s, 85.6 MB/s 00:07:22.919 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.919 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:22.919 256+0 records in 00:07:22.919 256+0 records out 00:07:22.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0598607 s, 17.5 MB/s 00:07:22.919 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.919 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:23.180 256+0 records in 00:07:23.180 256+0 records out 00:07:23.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0632566 s, 16.6 MB/s 00:07:23.180 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:23.180 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:23.180 256+0 records in 00:07:23.180 256+0 records out 00:07:23.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0562239 s, 18.7 MB/s 00:07:23.180 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:23.180 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:23.180 256+0 records in 00:07:23.180 256+0 records out 00:07:23.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0556457 s, 18.8 MB/s 00:07:23.180 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:23.180 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:23.180 256+0 records in 00:07:23.180 256+0 records out 00:07:23.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0564407 s, 18.6 MB/s 00:07:23.180 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:23.180 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:23.180 256+0 records in 00:07:23.180 256+0 records out 00:07:23.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0616167 s, 17.0 MB/s 00:07:23.180 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:23.180 23:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:23.441 256+0 records in 00:07:23.441 256+0 records out 00:07:23.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.055489 s, 18.9 MB/s 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.441 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:23.701 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:23.701 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:23.701 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:23.701 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.701 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.701 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:23.701 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.701 23:12:47 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:23.701 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.701 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:23.961 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:23.962 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:23.962 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:23.962 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.962 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.962 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:23.962 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.962 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.962 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.962 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:23.962 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:23.962 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:23.962 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:23.962 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.962 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.962 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:23.962 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.962 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.962 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.962 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:24.221 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:24.221 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:24.221 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:24.221 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.221 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.221 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:24.221 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:24.221 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.221 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.221 23:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:24.482 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:24.482 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:24.482 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:24.482 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.482 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.482 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:24.482 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:24.482 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.482 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.482 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:24.743 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:24.743 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:24.743 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:24.743 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.743 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.743 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:24.743 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:24.743 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.743 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.744 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:25.004 23:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:25.262 malloc_lvol_verify 00:07:25.262 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:25.527 6f21b661-2e66-4278-9125-f62060f942ed 00:07:25.527 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:25.788 75cf6194-01bb-4acd-9466-7d296f6a0f51 00:07:25.788 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:26.045 /dev/nbd0 00:07:26.045 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:26.045 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:26.045 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:26.045 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:26.045 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:26.045 mke2fs 1.47.0 (5-Feb-2023) 00:07:26.045 Discarding device blocks: 0/4096 done 00:07:26.046 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:26.046 00:07:26.046 Allocating group tables: 0/1 done 00:07:26.046 Writing inode tables: 0/1 done 00:07:26.046 Creating journal (1024 blocks): done 00:07:26.046 Writing superblocks and filesystem accounting information: 0/1 done 00:07:26.046 00:07:26.046 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:26.046 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.046 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:26.046 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:26.046 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:26.046 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:26.046 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:26.046 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:26.306 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:26.306 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:26.306 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.306 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.306 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:26.306 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:26.306 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.306 23:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73018 00:07:26.306 23:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73018 ']' 00:07:26.306 23:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73018 00:07:26.306 23:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:26.306 23:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.306 23:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73018 00:07:26.306 23:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.306 23:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.306 killing process with pid 73018 00:07:26.306 23:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73018' 00:07:26.306 23:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73018 00:07:26.306 23:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73018 00:07:26.306 23:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:26.307 00:07:26.307 real 0m9.793s 00:07:26.307 user 0m14.412s 00:07:26.307 sys 0m3.310s 00:07:26.307 23:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.307 23:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:26.307 ************************************ 00:07:26.307 END TEST bdev_nbd 00:07:26.307 ************************************ 00:07:26.307 23:12:50 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:26.307 23:12:50 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:26.307 skipping fio tests on NVMe due to multi-ns failures. 00:07:26.307 23:12:50 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:26.307 23:12:50 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:26.307 23:12:50 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:26.307 23:12:50 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:26.307 23:12:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:26.307 23:12:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.307 23:12:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:26.307 ************************************ 00:07:26.307 START TEST bdev_verify 00:07:26.307 ************************************ 00:07:26.307 23:12:50 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:26.566 [2024-11-17 23:12:50.167872] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:26.566 [2024-11-17 23:12:50.167990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73426 ] 00:07:26.566 [2024-11-17 23:12:50.309136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:26.566 [2024-11-17 23:12:50.336315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.566 [2024-11-17 23:12:50.336369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.138 Running I/O for 5 seconds... 
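Unpacked for readability, the bdevperf invocation above is reproduced below with one annotation per flag (binary and config paths verbatim from the log; annotations follow standard bdevperf usage, while -C and the empty trailing argument are reproduced without interpretation):

# -q 128    queue depth: 128 outstanding I/Os per job
# -o 4096   I/O size: 4 KiB
# -w verify write a pattern, read it back, and compare
# -t 5      run for 5 seconds
# -m 0x3    core mask: the two reactors on cores 0 and 1 seen in the log
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''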
00:07:29.461 23680.00 IOPS, 92.50 MiB/s [2024-11-17T23:12:54.224Z] 24032.00 IOPS, 93.88 MiB/s [2024-11-17T23:12:55.164Z] 23104.00 IOPS, 90.25 MiB/s [2024-11-17T23:12:56.106Z] 22224.00 IOPS, 86.81 MiB/s [2024-11-17T23:12:56.106Z] 21606.40 IOPS, 84.40 MiB/s 00:07:32.285 Latency(us) 00:07:32.285 [2024-11-17T23:12:56.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.285 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.285 Verification LBA range: start 0x0 length 0xbd0bd 00:07:32.285 Nvme0n1 : 5.05 1494.59 5.84 0.00 0.00 85432.26 16232.76 83886.08 00:07:32.285 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.285 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:32.285 Nvme0n1 : 5.05 1544.72 6.03 0.00 0.00 82557.02 13611.32 75820.11 00:07:32.285 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.285 Verification LBA range: start 0x0 length 0x4ff80 00:07:32.285 Nvme1n1p1 : 5.05 1494.06 5.84 0.00 0.00 85349.31 17442.66 83079.48 00:07:32.285 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.285 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:32.285 Nvme1n1p1 : 5.06 1543.19 6.03 0.00 0.00 82385.74 14518.74 68157.44 00:07:32.285 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.285 Verification LBA range: start 0x0 length 0x4ff7f 00:07:32.285 Nvme1n1p2 : 5.06 1492.59 5.83 0.00 0.00 85268.78 18350.08 78643.20 00:07:32.285 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.285 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:32.285 Nvme1n1p2 : 5.08 1549.25 6.05 0.00 0.00 81962.74 8418.86 66544.25 00:07:32.285 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.285 Verification LBA range: start 0x0 length 0x80000 00:07:32.285 Nvme2n1 : 5.06 1491.17 5.82 0.00 0.00 85205.77 20164.92 76223.41 00:07:32.285 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.285 Verification LBA range: start 0x80000 length 0x80000 00:07:32.285 Nvme2n1 : 5.08 1548.83 6.05 0.00 0.00 81831.23 8217.21 68157.44 00:07:32.285 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.285 Verification LBA range: start 0x0 length 0x80000 00:07:32.285 Nvme2n2 : 5.07 1490.71 5.82 0.00 0.00 85083.25 20467.40 78643.20 00:07:32.285 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.285 Verification LBA range: start 0x80000 length 0x80000 00:07:32.285 Nvme2n2 : 5.08 1548.42 6.05 0.00 0.00 81700.09 7864.32 70577.23 00:07:32.285 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.285 Verification LBA range: start 0x0 length 0x80000 00:07:32.285 Nvme2n3 : 5.07 1490.29 5.82 0.00 0.00 84967.71 16837.71 81869.59 00:07:32.285 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.285 Verification LBA range: start 0x80000 length 0x80000 00:07:32.285 Nvme2n3 : 5.09 1557.78 6.09 0.00 0.00 81207.46 8065.97 71383.83 00:07:32.285 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:32.285 Verification LBA range: start 0x0 length 0x20000 00:07:32.285 Nvme3n1 : 5.09 1496.23 5.84 0.00 0.00 84464.33 9981.64 84692.68 00:07:32.285 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:32.285 Verification LBA range: start 0x20000 length 0x20000 00:07:32.285 Nvme3n1 : 
5.10 1557.36 6.08 0.00 0.00 81149.60 8318.03 71383.83 00:07:32.285 [2024-11-17T23:12:56.106Z] =================================================================================================================== 00:07:32.285 [2024-11-17T23:12:56.106Z] Total : 21299.19 83.20 0.00 0.00 83433.98 7864.32 84692.68 00:07:33.668 00:07:33.668 real 0m6.966s 00:07:33.668 user 0m13.185s 00:07:33.668 sys 0m0.219s 00:07:33.668 23:12:57 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.668 ************************************ 00:07:33.668 END TEST bdev_verify 00:07:33.668 ************************************ 00:07:33.668 23:12:57 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:33.668 23:12:57 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:33.668 23:12:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:33.668 23:12:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.668 23:12:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:33.668 ************************************ 00:07:33.668 START TEST bdev_verify_big_io 00:07:33.668 ************************************ 00:07:33.668 23:12:57 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:33.668 [2024-11-17 23:12:57.210294] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:33.668 [2024-11-17 23:12:57.210443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73519 ] 00:07:33.668 [2024-11-17 23:12:57.358587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.668 [2024-11-17 23:12:57.389699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.668 [2024-11-17 23:12:57.389790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.241 Running I/O for 5 seconds... 
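A quick consistency check on these tables: the MiB/s column equals IOPS x I/O size / 2^20. For the 4 KiB verify totals above, 21299.19 x 4096 / 1048576 = 83.20 MiB/s, matching the Total row; the big-I/O run below uses -o 65536, so each I/O weighs 16 times as much, and its Total row works out the same way: 1830.24 x 65536 / 1048576 = 114.39 MiB/s.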
00:07:38.169 682.00 IOPS, 42.62 MiB/s [2024-11-17T23:13:02.930Z] 1510.00 IOPS, 94.38 MiB/s [2024-11-17T23:13:03.878Z] 2044.00 IOPS, 127.75 MiB/s [2024-11-17T23:13:04.139Z] 2525.75 IOPS, 157.86 MiB/s 00:07:40.318 Latency(us) 00:07:40.318 [2024-11-17T23:13:04.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.318 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.318 Verification LBA range: start 0x0 length 0xbd0b 00:07:40.318 Nvme0n1 : 5.78 115.33 7.21 0.00 0.00 1072273.87 32465.53 1568024.42 00:07:40.318 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.318 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:40.318 Nvme0n1 : 5.70 127.55 7.97 0.00 0.00 974313.33 30449.03 1045349.61 00:07:40.318 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.318 Verification LBA range: start 0x0 length 0x4ff8 00:07:40.318 Nvme1n1p1 : 5.78 114.26 7.14 0.00 0.00 1047150.73 52025.50 1593835.52 00:07:40.318 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.318 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:40.318 Nvme1n1p1 : 5.78 116.21 7.26 0.00 0.00 1034413.14 79046.50 1387346.71 00:07:40.318 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.318 Verification LBA range: start 0x0 length 0x4ff7 00:07:40.318 Nvme1n1p2 : 5.87 117.75 7.36 0.00 0.00 988877.03 68560.74 1613193.85 00:07:40.318 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.318 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:40.318 Nvme1n1p2 : 5.78 118.59 7.41 0.00 0.00 982967.98 70980.53 1677721.60 00:07:40.318 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.318 Verification LBA range: start 0x0 length 0x8000 00:07:40.318 Nvme2n1 : 5.79 118.13 7.38 0.00 0.00 966130.06 68560.74 1632552.17 00:07:40.318 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.318 Verification LBA range: start 0x8000 length 0x8000 00:07:40.318 Nvme2n1 : 5.79 131.82 8.24 0.00 0.00 872949.83 71787.13 942105.21 00:07:40.318 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.318 Verification LBA range: start 0x0 length 0x8000 00:07:40.318 Nvme2n2 : 5.87 121.81 7.61 0.00 0.00 908358.15 69367.34 1664816.05 00:07:40.318 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.318 Verification LBA range: start 0x8000 length 0x8000 00:07:40.318 Nvme2n2 : 5.87 134.82 8.43 0.00 0.00 827287.36 84289.38 903388.55 00:07:40.318 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.318 Verification LBA range: start 0x0 length 0x8000 00:07:40.318 Nvme2n3 : 5.96 137.34 8.58 0.00 0.00 789101.11 10586.58 1677721.60 00:07:40.318 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.318 Verification LBA range: start 0x8000 length 0x8000 00:07:40.318 Nvme2n3 : 5.93 147.20 9.20 0.00 0.00 746090.55 21576.47 929199.66 00:07:40.318 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.318 Verification LBA range: start 0x0 length 0x2000 00:07:40.318 Nvme3n1 : 6.00 174.55 10.91 0.00 0.00 606024.12 1714.02 1297007.85 00:07:40.318 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.318 Verification LBA range: start 0x2000 length 0x2000 00:07:40.318 Nvme3n1 : 5.93 154.86 9.68 0.00 0.00 
690180.47 1291.82 948557.98 00:07:40.318 [2024-11-17T23:13:04.139Z] =================================================================================================================== 00:07:40.318 [2024-11-17T23:13:04.139Z] Total : 1830.24 114.39 0.00 0.00 874043.21 1291.82 1677721.60 00:07:41.260 00:07:41.260 real 0m7.674s 00:07:41.260 user 0m14.519s 00:07:41.260 sys 0m0.287s 00:07:41.260 23:13:04 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.260 ************************************ 00:07:41.260 END TEST bdev_verify_big_io 00:07:41.260 ************************************ 00:07:41.260 23:13:04 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:41.260 23:13:04 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.260 23:13:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:41.260 23:13:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.260 23:13:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.260 ************************************ 00:07:41.260 START TEST bdev_write_zeroes 00:07:41.260 ************************************ 00:07:41.260 23:13:04 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.260 [2024-11-17 23:13:04.954561] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:41.260 [2024-11-17 23:13:04.954700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73617 ] 00:07:41.519 [2024-11-17 23:13:05.102384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.519 [2024-11-17 23:13:05.135461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.779 Running I/O for 1 seconds... 
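The write_zeroes job below reuses the same harness with -w write_zeroes -t 1 on a single core (-c 0x1 in its EAL parameters): one second of zero-filling writes rather than pattern verification. For pulling the aggregate Total row out of a captured run, a hypothetical post-processing one-liner (bdevperf.log is an assumed filename, and the field positions assume bdevperf's raw table without the elapsed-time prefixes this CI log adds):

awk '/^[[:space:]]*Total[[:space:]]*:/ { print "IOPS=" $3, "MiB/s=" $4 }' bdevperf.log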
00:07:43.162 49664.00 IOPS, 194.00 MiB/s 00:07:43.162 Latency(us) 00:07:43.162 [2024-11-17T23:13:06.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.162 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.162 Nvme0n1 : 1.03 7109.83 27.77 0.00 0.00 17924.64 13712.15 34280.37 00:07:43.162 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.162 Nvme1n1p1 : 1.03 7100.48 27.74 0.00 0.00 17919.48 14115.45 34885.32 00:07:43.162 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.162 Nvme1n1p2 : 1.03 7140.63 27.89 0.00 0.00 17725.29 9275.86 32062.23 00:07:43.162 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.162 Nvme2n1 : 1.03 7132.38 27.86 0.00 0.00 17674.57 9578.34 28432.54 00:07:43.162 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.162 Nvme2n2 : 1.03 7124.13 27.83 0.00 0.00 17626.74 9427.10 26012.75 00:07:43.162 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.162 Nvme2n3 : 1.03 7116.03 27.80 0.00 0.00 17580.15 9679.16 26214.40 00:07:43.162 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.162 Nvme3n1 : 1.04 7046.07 27.52 0.00 0.00 17703.63 9981.64 31053.98 00:07:43.162 [2024-11-17T23:13:06.983Z] =================================================================================================================== 00:07:43.162 [2024-11-17T23:13:06.983Z] Total : 49769.54 194.41 0.00 0.00 17735.93 9275.86 34885.32 00:07:43.162 00:07:43.162 real 0m2.015s 00:07:43.162 user 0m1.680s 00:07:43.162 sys 0m0.218s 00:07:43.162 23:13:06 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.162 ************************************ 00:07:43.162 END TEST bdev_write_zeroes 00:07:43.162 ************************************ 00:07:43.162 23:13:06 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:43.162 23:13:06 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:43.162 23:13:06 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:43.162 23:13:06 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.162 23:13:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:43.162 ************************************ 00:07:43.162 START TEST bdev_json_nonenclosed 00:07:43.162 ************************************ 00:07:43.162 23:13:06 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:43.421 [2024-11-17 23:13:07.053936] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:07:43.421 [2024-11-17 23:13:07.054123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73659 ] 00:07:43.421 [2024-11-17 23:13:07.203259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.421 [2024-11-17 23:13:07.229574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.421 [2024-11-17 23:13:07.229671] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:43.421 [2024-11-17 23:13:07.229688] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:43.421 [2024-11-17 23:13:07.229700] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.680 00:07:43.680 real 0m0.337s 00:07:43.680 user 0m0.129s 00:07:43.680 sys 0m0.104s 00:07:43.680 23:13:07 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.680 ************************************ 00:07:43.680 END TEST bdev_json_nonenclosed 00:07:43.680 ************************************ 00:07:43.680 23:13:07 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:43.680 23:13:07 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:43.680 23:13:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:43.680 23:13:07 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.680 23:13:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:43.680 ************************************ 00:07:43.680 START TEST bdev_json_nonarray 00:07:43.680 ************************************ 00:07:43.680 23:13:07 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:43.680 [2024-11-17 23:13:07.443508] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:43.680 [2024-11-17 23:13:07.443653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73679 ] 00:07:43.939 [2024-11-17 23:13:07.590841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.939 [2024-11-17 23:13:07.617668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.939 [2024-11-17 23:13:07.617771] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:43.940 [2024-11-17 23:13:07.617790] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:43.940 [2024-11-17 23:13:07.617802] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.940 00:07:43.940 real 0m0.314s 00:07:43.940 user 0m0.125s 00:07:43.940 sys 0m0.085s 00:07:43.940 23:13:07 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.940 ************************************ 00:07:43.940 END TEST bdev_json_nonarray 00:07:43.940 ************************************ 00:07:43.940 23:13:07 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:43.940 23:13:07 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:43.940 23:13:07 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:43.940 23:13:07 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:43.940 23:13:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.940 23:13:07 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.940 23:13:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:43.940 ************************************ 00:07:43.940 START TEST bdev_gpt_uuid 00:07:43.940 ************************************ 00:07:43.940 23:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:43.940 23:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:43.940 23:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:44.206 23:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73705 00:07:44.206 23:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:44.206 23:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 73705 00:07:44.206 23:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 73705 ']' 00:07:44.206 23:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:44.206 23:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.206 23:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.206 23:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.206 23:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.206 23:13:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:44.206 [2024-11-17 23:13:07.830531] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:07:44.206 [2024-11-17 23:13:07.830654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73705 ] 00:07:44.206 [2024-11-17 23:13:07.974093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.206 [2024-11-17 23:13:08.006834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.177 23:13:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.177 23:13:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:45.177 23:13:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:45.177 23:13:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.177 23:13:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:45.438 Some configs were skipped because the RPC state that can call them passed over. 00:07:45.438 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.438 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:45.438 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.438 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:45.439 { 00:07:45.439 "name": "Nvme1n1p1", 00:07:45.439 "aliases": [ 00:07:45.439 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:45.439 ], 00:07:45.439 "product_name": "GPT Disk", 00:07:45.439 "block_size": 4096, 00:07:45.439 "num_blocks": 655104, 00:07:45.439 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:45.439 "assigned_rate_limits": { 00:07:45.439 "rw_ios_per_sec": 0, 00:07:45.439 "rw_mbytes_per_sec": 0, 00:07:45.439 "r_mbytes_per_sec": 0, 00:07:45.439 "w_mbytes_per_sec": 0 00:07:45.439 }, 00:07:45.439 "claimed": false, 00:07:45.439 "zoned": false, 00:07:45.439 "supported_io_types": { 00:07:45.439 "read": true, 00:07:45.439 "write": true, 00:07:45.439 "unmap": true, 00:07:45.439 "flush": true, 00:07:45.439 "reset": true, 00:07:45.439 "nvme_admin": false, 00:07:45.439 "nvme_io": false, 00:07:45.439 "nvme_io_md": false, 00:07:45.439 "write_zeroes": true, 00:07:45.439 "zcopy": false, 00:07:45.439 "get_zone_info": false, 00:07:45.439 "zone_management": false, 00:07:45.439 "zone_append": false, 00:07:45.439 "compare": true, 00:07:45.439 "compare_and_write": false, 00:07:45.439 "abort": true, 00:07:45.439 "seek_hole": false, 00:07:45.439 "seek_data": false, 00:07:45.439 "copy": true, 00:07:45.439 "nvme_iov_md": false 00:07:45.439 }, 00:07:45.439 "driver_specific": { 
00:07:45.439 "gpt": { 00:07:45.439 "base_bdev": "Nvme1n1", 00:07:45.439 "offset_blocks": 256, 00:07:45.439 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:45.439 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:45.439 "partition_name": "SPDK_TEST_first" 00:07:45.439 } 00:07:45.439 } 00:07:45.439 } 00:07:45.439 ]' 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:45.439 { 00:07:45.439 "name": "Nvme1n1p2", 00:07:45.439 "aliases": [ 00:07:45.439 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:45.439 ], 00:07:45.439 "product_name": "GPT Disk", 00:07:45.439 "block_size": 4096, 00:07:45.439 "num_blocks": 655103, 00:07:45.439 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:45.439 "assigned_rate_limits": { 00:07:45.439 "rw_ios_per_sec": 0, 00:07:45.439 "rw_mbytes_per_sec": 0, 00:07:45.439 "r_mbytes_per_sec": 0, 00:07:45.439 "w_mbytes_per_sec": 0 00:07:45.439 }, 00:07:45.439 "claimed": false, 00:07:45.439 "zoned": false, 00:07:45.439 "supported_io_types": { 00:07:45.439 "read": true, 00:07:45.439 "write": true, 00:07:45.439 "unmap": true, 00:07:45.439 "flush": true, 00:07:45.439 "reset": true, 00:07:45.439 "nvme_admin": false, 00:07:45.439 "nvme_io": false, 00:07:45.439 "nvme_io_md": false, 00:07:45.439 "write_zeroes": true, 00:07:45.439 "zcopy": false, 00:07:45.439 "get_zone_info": false, 00:07:45.439 "zone_management": false, 00:07:45.439 "zone_append": false, 00:07:45.439 "compare": true, 00:07:45.439 "compare_and_write": false, 00:07:45.439 "abort": true, 00:07:45.439 "seek_hole": false, 00:07:45.439 "seek_data": false, 00:07:45.439 "copy": true, 00:07:45.439 "nvme_iov_md": false 00:07:45.439 }, 00:07:45.439 "driver_specific": { 00:07:45.439 "gpt": { 00:07:45.439 "base_bdev": "Nvme1n1", 00:07:45.439 "offset_blocks": 655360, 00:07:45.439 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:45.439 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:45.439 "partition_name": "SPDK_TEST_second" 00:07:45.439 } 00:07:45.439 } 00:07:45.439 } 00:07:45.439 ]' 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:45.439 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:45.698 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:45.698 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 73705 00:07:45.698 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 73705 ']' 00:07:45.698 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 73705 00:07:45.698 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:45.699 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.699 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73705 00:07:45.699 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.699 killing process with pid 73705 00:07:45.699 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.699 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73705' 00:07:45.699 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 73705 00:07:45.699 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 73705 00:07:45.959 00:07:45.959 real 0m1.863s 00:07:45.959 user 0m2.014s 00:07:45.959 sys 0m0.403s 00:07:45.959 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.959 23:13:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:45.959 ************************************ 00:07:45.959 END TEST bdev_gpt_uuid 00:07:45.959 ************************************ 00:07:45.959 23:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:45.959 23:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:45.959 23:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:45.959 23:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:45.959 23:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:45.959 23:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:45.959 23:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:45.959 23:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:45.959 23:13:09 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:46.220 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:46.481 Waiting for block devices as requested 00:07:46.481 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:46.481 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:46.742 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:46.742 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:52.029 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:52.029 23:13:15 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:52.029 23:13:15 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:52.289 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:52.289 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:52.289 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:52.289 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:52.289 23:13:15 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:52.289 00:07:52.289 real 0m49.000s 00:07:52.289 user 1m2.317s 00:07:52.289 sys 0m7.639s 00:07:52.289 23:13:15 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.289 ************************************ 00:07:52.289 END TEST blockdev_nvme_gpt 00:07:52.289 ************************************ 00:07:52.289 23:13:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:52.290 23:13:15 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:52.290 23:13:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.290 23:13:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.290 23:13:15 -- common/autotest_common.sh@10 -- # set +x 00:07:52.290 ************************************ 00:07:52.290 START TEST nvme 00:07:52.290 ************************************ 00:07:52.290 23:13:15 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:52.290 * Looking for test storage... 00:07:52.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:52.290 23:13:16 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:52.290 23:13:16 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:52.290 23:13:16 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:52.290 23:13:16 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:52.290 23:13:16 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.290 23:13:16 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.290 23:13:16 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.290 23:13:16 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.290 23:13:16 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.290 23:13:16 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.290 23:13:16 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.290 23:13:16 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.290 23:13:16 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.290 23:13:16 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.290 23:13:16 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.290 23:13:16 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:52.290 23:13:16 nvme -- scripts/common.sh@345 -- # : 1 00:07:52.290 23:13:16 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.290 23:13:16 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:52.290 23:13:16 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:52.290 23:13:16 nvme -- scripts/common.sh@353 -- # local d=1 00:07:52.290 23:13:16 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.290 23:13:16 nvme -- scripts/common.sh@355 -- # echo 1 00:07:52.290 23:13:16 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.290 23:13:16 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:52.290 23:13:16 nvme -- scripts/common.sh@353 -- # local d=2 00:07:52.290 23:13:16 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.290 23:13:16 nvme -- scripts/common.sh@355 -- # echo 2 00:07:52.290 23:13:16 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.290 23:13:16 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.290 23:13:16 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.290 23:13:16 nvme -- scripts/common.sh@368 -- # return 0 00:07:52.290 23:13:16 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.290 23:13:16 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:52.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.290 --rc genhtml_branch_coverage=1 00:07:52.290 --rc genhtml_function_coverage=1 00:07:52.290 --rc genhtml_legend=1 00:07:52.290 --rc geninfo_all_blocks=1 00:07:52.290 --rc geninfo_unexecuted_blocks=1 00:07:52.290 00:07:52.290 ' 00:07:52.290 23:13:16 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:52.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.290 --rc genhtml_branch_coverage=1 00:07:52.290 --rc genhtml_function_coverage=1 00:07:52.290 --rc genhtml_legend=1 00:07:52.290 --rc geninfo_all_blocks=1 00:07:52.290 --rc geninfo_unexecuted_blocks=1 00:07:52.290 00:07:52.290 ' 00:07:52.290 23:13:16 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:52.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.290 --rc genhtml_branch_coverage=1 00:07:52.290 --rc genhtml_function_coverage=1 00:07:52.290 --rc genhtml_legend=1 00:07:52.290 --rc geninfo_all_blocks=1 00:07:52.290 --rc geninfo_unexecuted_blocks=1 00:07:52.290 00:07:52.290 ' 00:07:52.290 23:13:16 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:52.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.290 --rc genhtml_branch_coverage=1 00:07:52.290 --rc genhtml_function_coverage=1 00:07:52.290 --rc genhtml_legend=1 00:07:52.290 --rc geninfo_all_blocks=1 00:07:52.290 --rc geninfo_unexecuted_blocks=1 00:07:52.290 00:07:52.290 ' 00:07:52.290 23:13:16 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:52.860 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:53.429 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:53.429 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:53.429 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:53.429 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:53.690 23:13:17 nvme -- nvme/nvme.sh@79 -- # uname 00:07:53.690 23:13:17 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:53.690 23:13:17 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:53.690 23:13:17 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:53.690 23:13:17 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:53.690 23:13:17 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:53.690 23:13:17 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:53.690 Waiting for stub to ready for secondary processes... 00:07:53.690 23:13:17 nvme -- common/autotest_common.sh@1075 -- # stubpid=74334 00:07:53.690 23:13:17 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:53.690 23:13:17 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:53.690 23:13:17 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74334 ]] 00:07:53.690 23:13:17 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:53.690 23:13:17 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:53.690 [2024-11-17 23:13:17.338716] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:53.690 [2024-11-17 23:13:17.338854] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:54.633 23:13:18 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:54.633 23:13:18 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74334 ]] 00:07:54.633 23:13:18 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:54.633 [2024-11-17 23:13:18.448361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.893 [2024-11-17 23:13:18.466956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.893 [2024-11-17 23:13:18.467131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.893 [2024-11-17 23:13:18.467233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.893 [2024-11-17 23:13:18.479004] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:54.893 [2024-11-17 23:13:18.479045] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:54.893 [2024-11-17 23:13:18.493061] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:54.893 [2024-11-17 23:13:18.493266] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:54.893 [2024-11-17 23:13:18.495063] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:54.893 [2024-11-17 23:13:18.495238] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:54.893 [2024-11-17 23:13:18.495317] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:54.893 [2024-11-17 23:13:18.495869] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:54.893 [2024-11-17 23:13:18.496042] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:54.893 [2024-11-17 23:13:18.496092] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:54.893 [2024-11-17 23:13:18.497498] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:54.893 [2024-11-17 23:13:18.497688] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:54.893 [2024-11-17 23:13:18.497749] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:54.893 [2024-11-17 23:13:18.497791] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:54.893 [2024-11-17 23:13:18.497829] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:55.835 done. 00:07:55.835 23:13:19 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:55.835 23:13:19 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:55.835 23:13:19 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:55.835 23:13:19 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:55.835 23:13:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.835 23:13:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.835 ************************************ 00:07:55.835 START TEST nvme_reset 00:07:55.835 ************************************ 00:07:55.835 23:13:19 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:55.835 Initializing NVMe Controllers 00:07:55.835 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:55.835 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:55.835 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:55.835 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:55.835 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:55.835 00:07:55.835 real 0m0.193s 00:07:55.835 user 0m0.063s 00:07:55.835 sys 0m0.084s 00:07:55.835 23:13:19 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.835 ************************************ 00:07:55.835 END TEST nvme_reset 00:07:55.835 ************************************ 00:07:55.835 23:13:19 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:55.835 23:13:19 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:55.835 23:13:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.835 23:13:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.835 23:13:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.835 ************************************ 00:07:55.835 START TEST nvme_identify 00:07:55.835 ************************************ 00:07:55.835 23:13:19 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:55.835 23:13:19 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:55.835 23:13:19 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:55.835 23:13:19 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:55.835 23:13:19 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:55.835 23:13:19 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:55.835 23:13:19 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:55.835 23:13:19 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:55.835 23:13:19 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:55.835 23:13:19 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:55.835 23:13:19 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:55.835 23:13:19 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:55.835 23:13:19 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:56.112 [2024-11-17 23:13:19.791421] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 74367 terminated unexpected 00:07:56.112 ===================================================== 00:07:56.112 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:56.112 ===================================================== 00:07:56.112 Controller Capabilities/Features 00:07:56.112 ================================ 00:07:56.112 Vendor ID: 1b36 00:07:56.112 Subsystem Vendor ID: 1af4 00:07:56.112 Serial Number: 12343 00:07:56.112 Model Number: QEMU NVMe Ctrl 00:07:56.112 Firmware Version: 8.0.0 00:07:56.112 Recommended Arb Burst: 6 00:07:56.112 IEEE OUI Identifier: 00 54 52 00:07:56.112 Multi-path I/O 00:07:56.112 May have multiple subsystem ports: No 00:07:56.112 May have multiple controllers: Yes 00:07:56.112 Associated with SR-IOV VF: No 00:07:56.112 Max Data Transfer Size: 524288 00:07:56.112 Max Number of Namespaces: 256 00:07:56.112 Max Number of I/O Queues: 64 00:07:56.112 NVMe Specification Version (VS): 1.4 00:07:56.112 NVMe Specification Version (Identify): 1.4 00:07:56.112 Maximum Queue Entries: 2048 00:07:56.112 Contiguous Queues Required: Yes 00:07:56.112 Arbitration Mechanisms Supported 00:07:56.112 Weighted Round Robin: Not Supported 00:07:56.112 Vendor Specific: Not Supported 00:07:56.112 Reset Timeout: 7500 ms 00:07:56.112 Doorbell Stride: 4 bytes 00:07:56.112 NVM Subsystem Reset: Not Supported 00:07:56.112 Command Sets Supported 00:07:56.112 NVM Command Set: Supported 00:07:56.112 Boot Partition: Not Supported 00:07:56.112 Memory Page Size Minimum: 4096 bytes 00:07:56.112 Memory Page Size Maximum: 65536 bytes 00:07:56.112 Persistent Memory Region: Not Supported 00:07:56.113 Optional Asynchronous Events Supported 00:07:56.113 Namespace Attribute Notices: Supported 00:07:56.113 Firmware Activation Notices: Not Supported 00:07:56.113 ANA Change Notices: Not Supported 00:07:56.113 PLE Aggregate Log Change Notices: Not Supported 00:07:56.113 LBA Status Info Alert Notices: Not Supported 00:07:56.113 EGE Aggregate Log Change Notices: Not Supported 00:07:56.113 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.113 Zone Descriptor Change Notices: Not Supported 00:07:56.113 Discovery Log Change Notices: Not Supported 00:07:56.113 Controller Attributes 00:07:56.113 128-bit Host Identifier: Not Supported 00:07:56.113 Non-Operational Permissive Mode: Not Supported 00:07:56.113 NVM Sets: Not Supported 00:07:56.113 Read Recovery Levels: Not Supported 00:07:56.113 Endurance Groups: Supported 00:07:56.113 Predictable Latency Mode: Not Supported 00:07:56.113 Traffic Based Keep ALive: Not Supported 00:07:56.113 Namespace Granularity: Not Supported 00:07:56.113 SQ Associations: Not Supported 00:07:56.113 UUID List: Not Supported 00:07:56.113 Multi-Domain Subsystem: Not Supported 00:07:56.113 Fixed Capacity Management: Not Supported 00:07:56.113 Variable Capacity Management: Not Supported 00:07:56.113 Delete Endurance Group: Not Supported 00:07:56.113 Delete NVM Set: Not Supported 00:07:56.113 Extended LBA Formats Supported: Supported 00:07:56.113 Flexible Data Placement Supported: Supported 00:07:56.113 00:07:56.113 Controller Memory Buffer Support 00:07:56.113 ================================ 00:07:56.113 Supported: No 00:07:56.113 
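The nvme_identify helper above collects its device list the way get_nvme_bdfs does: scripts/gen_nvme.sh emits a JSON bdev config and jq extracts each controller's PCI address (traddr). A minimal standalone sketch of that pattern, assuming the repo layout used in this run (variable names here are illustrative, not the test's own):

    #!/usr/bin/env bash
    # Enumerate NVMe BDFs from gen_nvme.sh's JSON config, as get_nvme_bdfs does.
    rootdir=/home/vagrant/spdk_repo/spdk   # repo root seen in this log
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe devices found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # e.g. 0000:00:10.0 through 0000:00:13.0

The test then runs build/bin/spdk_nvme_identify -i 0, which walks all attached controllers and produces the per-controller dumps that follow.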
00:07:56.113 Persistent Memory Region Support 00:07:56.113 ================================ 00:07:56.113 Supported: No 00:07:56.113 00:07:56.113 Admin Command Set Attributes 00:07:56.113 ============================ 00:07:56.113 Security Send/Receive: Not Supported 00:07:56.113 Format NVM: Supported 00:07:56.113 Firmware Activate/Download: Not Supported 00:07:56.113 Namespace Management: Supported 00:07:56.113 Device Self-Test: Not Supported 00:07:56.113 Directives: Supported 00:07:56.113 NVMe-MI: Not Supported 00:07:56.113 Virtualization Management: Not Supported 00:07:56.113 Doorbell Buffer Config: Supported 00:07:56.113 Get LBA Status Capability: Not Supported 00:07:56.113 Command & Feature Lockdown Capability: Not Supported 00:07:56.113 Abort Command Limit: 4 00:07:56.113 Async Event Request Limit: 4 00:07:56.113 Number of Firmware Slots: N/A 00:07:56.113 Firmware Slot 1 Read-Only: N/A 00:07:56.113 Firmware Activation Without Reset: N/A 00:07:56.113 Multiple Update Detection Support: N/A 00:07:56.113 Firmware Update Granularity: No Information Provided 00:07:56.113 Per-Namespace SMART Log: Yes 00:07:56.113 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.113 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:56.113 Command Effects Log Page: Supported 00:07:56.113 Get Log Page Extended Data: Supported 00:07:56.113 Telemetry Log Pages: Not Supported 00:07:56.113 Persistent Event Log Pages: Not Supported 00:07:56.113 Supported Log Pages Log Page: May Support 00:07:56.113 Commands Supported & Effects Log Page: Not Supported 00:07:56.113 Feature Identifiers & Effects Log Page:May Support 00:07:56.113 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.113 Data Area 4 for Telemetry Log: Not Supported 00:07:56.113 Error Log Page Entries Supported: 1 00:07:56.113 Keep Alive: Not Supported 00:07:56.113 00:07:56.113 NVM Command Set Attributes 00:07:56.113 ========================== 00:07:56.113 Submission Queue Entry Size 00:07:56.113 Max: 64 00:07:56.113 Min: 64 00:07:56.113 Completion Queue Entry Size 00:07:56.113 Max: 16 00:07:56.113 Min: 16 00:07:56.113 Number of Namespaces: 256 00:07:56.113 Compare Command: Supported 00:07:56.113 Write Uncorrectable Command: Not Supported 00:07:56.113 Dataset Management Command: Supported 00:07:56.113 Write Zeroes Command: Supported 00:07:56.113 Set Features Save Field: Supported 00:07:56.113 Reservations: Not Supported 00:07:56.113 Timestamp: Supported 00:07:56.113 Copy: Supported 00:07:56.113 Volatile Write Cache: Present 00:07:56.113 Atomic Write Unit (Normal): 1 00:07:56.113 Atomic Write Unit (PFail): 1 00:07:56.113 Atomic Compare & Write Unit: 1 00:07:56.113 Fused Compare & Write: Not Supported 00:07:56.113 Scatter-Gather List 00:07:56.113 SGL Command Set: Supported 00:07:56.113 SGL Keyed: Not Supported 00:07:56.113 SGL Bit Bucket Descriptor: Not Supported 00:07:56.113 SGL Metadata Pointer: Not Supported 00:07:56.113 Oversized SGL: Not Supported 00:07:56.113 SGL Metadata Address: Not Supported 00:07:56.113 SGL Offset: Not Supported 00:07:56.113 Transport SGL Data Block: Not Supported 00:07:56.113 Replay Protected Memory Block: Not Supported 00:07:56.113 00:07:56.113 Firmware Slot Information 00:07:56.113 ========================= 00:07:56.113 Active slot: 1 00:07:56.113 Slot 1 Firmware Revision: 1.0 00:07:56.113 00:07:56.113 00:07:56.113 Commands Supported and Effects 00:07:56.113 ============================== 00:07:56.113 Admin Commands 00:07:56.113 -------------- 00:07:56.113 Delete I/O Submission Queue (00h): Supported 
00:07:56.113 Create I/O Submission Queue (01h): Supported 00:07:56.113 Get Log Page (02h): Supported 00:07:56.113 Delete I/O Completion Queue (04h): Supported 00:07:56.113 Create I/O Completion Queue (05h): Supported 00:07:56.113 Identify (06h): Supported 00:07:56.113 Abort (08h): Supported 00:07:56.113 Set Features (09h): Supported 00:07:56.113 Get Features (0Ah): Supported 00:07:56.113 Asynchronous Event Request (0Ch): Supported 00:07:56.113 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.113 Directive Send (19h): Supported 00:07:56.113 Directive Receive (1Ah): Supported 00:07:56.113 Virtualization Management (1Ch): Supported 00:07:56.113 Doorbell Buffer Config (7Ch): Supported 00:07:56.114 Format NVM (80h): Supported LBA-Change 00:07:56.114 I/O Commands 00:07:56.114 ------------ 00:07:56.114 Flush (00h): Supported LBA-Change 00:07:56.114 Write (01h): Supported LBA-Change 00:07:56.114 Read (02h): Supported 00:07:56.114 Compare (05h): Supported 00:07:56.114 Write Zeroes (08h): Supported LBA-Change 00:07:56.114 Dataset Management (09h): Supported LBA-Change 00:07:56.114 Unknown (0Ch): Supported 00:07:56.114 Unknown (12h): Supported 00:07:56.114 Copy (19h): Supported LBA-Change 00:07:56.114 Unknown (1Dh): Supported LBA-Change 00:07:56.114 00:07:56.114 Error Log 00:07:56.114 ========= 00:07:56.114 00:07:56.114 Arbitration 00:07:56.114 =========== 00:07:56.114 Arbitration Burst: no limit 00:07:56.114 00:07:56.114 Power Management 00:07:56.114 ================ 00:07:56.114 Number of Power States: 1 00:07:56.114 Current Power State: Power State #0 00:07:56.114 Power State #0: 00:07:56.114 Max Power: 25.00 W 00:07:56.114 Non-Operational State: Operational 00:07:56.114 Entry Latency: 16 microseconds 00:07:56.114 Exit Latency: 4 microseconds 00:07:56.114 Relative Read Throughput: 0 00:07:56.114 Relative Read Latency: 0 00:07:56.114 Relative Write Throughput: 0 00:07:56.114 Relative Write Latency: 0 00:07:56.114 Idle Power: Not Reported 00:07:56.114 Active Power: Not Reported 00:07:56.114 Non-Operational Permissive Mode: Not Supported 00:07:56.114 00:07:56.114 Health Information 00:07:56.114 ================== 00:07:56.114 Critical Warnings: 00:07:56.114 Available Spare Space: OK 00:07:56.114 Temperature: OK 00:07:56.114 Device Reliability: OK 00:07:56.114 Read Only: No 00:07:56.114 Volatile Memory Backup: OK 00:07:56.114 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.114 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.114 Available Spare: 0% 00:07:56.114 Available Spare Threshold: 0% 00:07:56.114 Life Percentage Used: 0% 00:07:56.114 Data Units Read: 916 00:07:56.114 Data Units Written: 845 00:07:56.114 Host Read Commands: 39033 00:07:56.114 Host Write Commands: 38456 00:07:56.114 Controller Busy Time: 0 minutes 00:07:56.114 Power Cycles: 0 00:07:56.114 Power On Hours: 0 hours 00:07:56.114 Unsafe Shutdowns: 0 00:07:56.114 Unrecoverable Media Errors: 0 00:07:56.114 Lifetime Error Log Entries: 0 00:07:56.114 Warning Temperature Time: 0 minutes 00:07:56.114 Critical Temperature Time: 0 minutes 00:07:56.114 00:07:56.114 Number of Queues 00:07:56.114 ================ 00:07:56.114 Number of I/O Submission Queues: 64 00:07:56.114 Number of I/O Completion Queues: 64 00:07:56.114 00:07:56.114 ZNS Specific Controller Data 00:07:56.114 ============================ 00:07:56.114 Zone Append Size Limit: 0 00:07:56.114 00:07:56.114 00:07:56.114 Active Namespaces 00:07:56.114 ================= 00:07:56.114 Namespace ID:1 00:07:56.114 Error Recovery Timeout: Unlimited 00:07:56.114 
Command Set Identifier: NVM (00h) 00:07:56.114 Deallocate: Supported 00:07:56.114 Deallocated/Unwritten Error: Supported 00:07:56.114 Deallocated Read Value: All 0x00 00:07:56.114 Deallocate in Write Zeroes: Not Supported 00:07:56.114 Deallocated Guard Field: 0xFFFF 00:07:56.114 Flush: Supported 00:07:56.114 Reservation: Not Supported 00:07:56.114 Namespace Sharing Capabilities: Multiple Controllers 00:07:56.114 Size (in LBAs): 262144 (1GiB) 00:07:56.114 Capacity (in LBAs): 262144 (1GiB) 00:07:56.114 Utilization (in LBAs): 262144 (1GiB) 00:07:56.114 Thin Provisioning: Not Supported 00:07:56.114 Per-NS Atomic Units: No 00:07:56.114 Maximum Single Source Range Length: 128 00:07:56.114 Maximum Copy Length: 128 00:07:56.114 Maximum Source Range Count: 128 00:07:56.114 NGUID/EUI64 Never Reused: No 00:07:56.114 Namespace Write Protected: No 00:07:56.114 Endurance group ID: 1 00:07:56.114 Number of LBA Formats: 8 00:07:56.114 Current LBA Format: LBA Format #04 00:07:56.114 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.114 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.114 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.114 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.114 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.114 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.114 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.114 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.114 00:07:56.114 Get Feature FDP: 00:07:56.114 ================ 00:07:56.114 Enabled: Yes 00:07:56.114 FDP configuration index: 0 00:07:56.114 00:07:56.114 FDP configurations log page 00:07:56.114 =========================== 00:07:56.114 Number of FDP configurations: 1 00:07:56.114 Version: 0 00:07:56.114 Size: 112 00:07:56.114 FDP Configuration Descriptor: 0 00:07:56.114 Descriptor Size: 96 00:07:56.114 Reclaim Group Identifier format: 2 00:07:56.114 FDP Volatile Write Cache: Not Present 00:07:56.114 FDP Configuration: Valid 00:07:56.114 Vendor Specific Size: 0 00:07:56.114 Number of Reclaim Groups: 2 00:07:56.114 Number of Reclaim Unit Handles: 8 00:07:56.114 Max Placement Identifiers: 128 00:07:56.114 Number of Namespaces Supported: 256 00:07:56.114 Reclaim unit Nominal Size: 6000000 bytes 00:07:56.114 Estimated Reclaim Unit Time Limit: Not Reported 00:07:56.114 RUH Desc #000: RUH Type: Initially Isolated 00:07:56.114 RUH Desc #001: RUH Type: Initially Isolated 00:07:56.114 RUH Desc #002: RUH Type: Initially Isolated 00:07:56.114 RUH Desc #003: RUH Type: Initially Isolated 00:07:56.115 RUH Desc #004: RUH Type: Initially Isolated 00:07:56.115 RUH Desc #005: RUH Type: Initially Isolated 00:07:56.115 RUH Desc #006: RUH Type: Initially Isolated 00:07:56.115 RUH Desc #007: RUH Type: Initially Isolated 00:07:56.115 00:07:56.115 FDP reclaim unit handle usage log page 00:07:56.115 ==================================[2024-11-17 23:13:19.794608] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 74367 terminated unexpected 00:07:56.115 ==== 00:07:56.115 Number of Reclaim Unit Handles: 8 00:07:56.115 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:56.115 RUH Usage Desc #001: RUH Attributes: Unused 00:07:56.115 RUH Usage Desc #002: RUH Attributes: Unused 00:07:56.115 RUH Usage Desc #003: RUH Attributes: Unused 00:07:56.115 RUH Usage Desc #004: RUH Attributes: Unused 00:07:56.115 RUH Usage Desc #005: RUH Attributes: Unused 00:07:56.115 RUH Usage Desc #006: RUH Attributes: Unused 00:07:56.115 RUH Usage Desc
#007: RUH Attributes: Unused 00:07:56.115 00:07:56.115 FDP statistics log page 00:07:56.115 ======================= 00:07:56.115 Host bytes with metadata written: 509452288 00:07:56.115 Media bytes with metadata written: 509509632 00:07:56.115 Media bytes erased: 0 00:07:56.115 00:07:56.115 FDP events log page 00:07:56.115 =================== 00:07:56.115 Number of FDP events: 0 00:07:56.115 00:07:56.115 NVM Specific Namespace Data 00:07:56.115 =========================== 00:07:56.115 Logical Block Storage Tag Mask: 0 00:07:56.115 Protection Information Capabilities: 00:07:56.115 16b Guard Protection Information Storage Tag Support: No 00:07:56.115 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.115 Storage Tag Check Read Support: No 00:07:56.115 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.115 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.115 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.115 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.115 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.115 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.115 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.115 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.115 ===================================================== 00:07:56.115 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:56.115 ===================================================== 00:07:56.115 Controller Capabilities/Features 00:07:56.115 ================================ 00:07:56.115 Vendor ID: 1b36 00:07:56.115 Subsystem Vendor ID: 1af4 00:07:56.115 Serial Number: 12340 00:07:56.115 Model Number: QEMU NVMe Ctrl 00:07:56.115 Firmware Version: 8.0.0 00:07:56.115 Recommended Arb Burst: 6 00:07:56.115 IEEE OUI Identifier: 00 54 52 00:07:56.115 Multi-path I/O 00:07:56.115 May have multiple subsystem ports: No 00:07:56.115 May have multiple controllers: No 00:07:56.115 Associated with SR-IOV VF: No 00:07:56.115 Max Data Transfer Size: 524288 00:07:56.115 Max Number of Namespaces: 256 00:07:56.115 Max Number of I/O Queues: 64 00:07:56.115 NVMe Specification Version (VS): 1.4 00:07:56.115 NVMe Specification Version (Identify): 1.4 00:07:56.115 Maximum Queue Entries: 2048 00:07:56.115 Contiguous Queues Required: Yes 00:07:56.115 Arbitration Mechanisms Supported 00:07:56.115 Weighted Round Robin: Not Supported 00:07:56.115 Vendor Specific: Not Supported 00:07:56.115 Reset Timeout: 7500 ms 00:07:56.115 Doorbell Stride: 4 bytes 00:07:56.115 NVM Subsystem Reset: Not Supported 00:07:56.115 Command Sets Supported 00:07:56.115 NVM Command Set: Supported 00:07:56.115 Boot Partition: Not Supported 00:07:56.115 Memory Page Size Minimum: 4096 bytes 00:07:56.115 Memory Page Size Maximum: 65536 bytes 00:07:56.115 Persistent Memory Region: Not Supported 00:07:56.115 Optional Asynchronous Events Supported 00:07:56.115 Namespace Attribute Notices: Supported 00:07:56.115 Firmware Activation Notices: Not Supported 00:07:56.115 ANA Change Notices: Not Supported 00:07:56.115 PLE Aggregate Log Change Notices: Not Supported 00:07:56.115 LBA Status Info Alert Notices: Not Supported 00:07:56.115 EGE Aggregate Log Change 
Notices: Not Supported 00:07:56.115 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.115 Zone Descriptor Change Notices: Not Supported 00:07:56.115 Discovery Log Change Notices: Not Supported 00:07:56.115 Controller Attributes 00:07:56.115 128-bit Host Identifier: Not Supported 00:07:56.115 Non-Operational Permissive Mode: Not Supported 00:07:56.115 NVM Sets: Not Supported 00:07:56.115 Read Recovery Levels: Not Supported 00:07:56.115 Endurance Groups: Not Supported 00:07:56.115 Predictable Latency Mode: Not Supported 00:07:56.115 Traffic Based Keep ALive: Not Supported 00:07:56.115 Namespace Granularity: Not Supported 00:07:56.115 SQ Associations: Not Supported 00:07:56.115 UUID List: Not Supported 00:07:56.115 Multi-Domain Subsystem: Not Supported 00:07:56.115 Fixed Capacity Management: Not Supported 00:07:56.115 Variable Capacity Management: Not Supported 00:07:56.115 Delete Endurance Group: Not Supported 00:07:56.115 Delete NVM Set: Not Supported 00:07:56.115 Extended LBA Formats Supported: Supported 00:07:56.115 Flexible Data Placement Supported: Not Supported 00:07:56.115 00:07:56.115 Controller Memory Buffer Support 00:07:56.115 ================================ 00:07:56.115 Supported: No 00:07:56.115 00:07:56.115 Persistent Memory Region Support 00:07:56.115 ================================ 00:07:56.115 Supported: No 00:07:56.115 00:07:56.115 Admin Command Set Attributes 00:07:56.115 ============================ 00:07:56.115 Security Send/Receive: Not Supported 00:07:56.115 Format NVM: Supported 00:07:56.115 Firmware Activate/Download: Not Supported 00:07:56.115 Namespace Management: Supported 00:07:56.115 Device Self-Test: Not Supported 00:07:56.116 Directives: Supported 00:07:56.116 NVMe-MI: Not Supported 00:07:56.116 Virtualization Management: Not Supported 00:07:56.116 Doorbell Buffer Config: Supported 00:07:56.116 Get LBA Status Capability: Not Supported 00:07:56.116 Command & Feature Lockdown Capability: Not Supported 00:07:56.116 Abort Command Limit: 4 00:07:56.116 Async Event Request Limit: 4 00:07:56.116 Number of Firmware Slots: N/A 00:07:56.116 Firmware Slot 1 Read-Only: N/A 00:07:56.116 Firmware Activation Without Reset: N/A 00:07:56.116 Multiple Update Detection Support: N/A 00:07:56.116 Firmware Update Granularity: No Information Provided 00:07:56.116 Per-Namespace SMART Log: Yes 00:07:56.116 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.116 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:56.116 Command Effects Log Page: Supported 00:07:56.116 Get Log Page Extended Data: Supported 00:07:56.116 Telemetry Log Pages: Not Supported 00:07:56.116 Persistent Event Log Pages: Not Supported 00:07:56.116 Supported Log Pages Log Page: May Support 00:07:56.116 Commands Supported & Effects Log Page: Not Supported 00:07:56.116 Feature Identifiers & Effects Log Page:May Support 00:07:56.116 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.116 Data Area 4 for Telemetry Log: Not Supported 00:07:56.116 Error Log Page Entries Supported: 1 00:07:56.116 Keep Alive: Not Supported 00:07:56.116 00:07:56.116 NVM Command Set Attributes 00:07:56.116 ========================== 00:07:56.116 Submission Queue Entry Size 00:07:56.116 Max: 64 00:07:56.116 Min: 64 00:07:56.116 Completion Queue Entry Size 00:07:56.116 Max: 16 00:07:56.116 Min: 16 00:07:56.116 Number of Namespaces: 256 00:07:56.116 Compare Command: Supported 00:07:56.116 Write Uncorrectable Command: Not Supported 00:07:56.116 Dataset Management Command: Supported 00:07:56.116 Write Zeroes Command: 
Supported 00:07:56.116 Set Features Save Field: Supported 00:07:56.116 Reservations: Not Supported 00:07:56.116 Timestamp: Supported 00:07:56.116 Copy: Supported 00:07:56.116 Volatile Write Cache: Present 00:07:56.116 Atomic Write Unit (Normal): 1 00:07:56.116 Atomic Write Unit (PFail): 1 00:07:56.116 Atomic Compare & Write Unit: 1 00:07:56.116 Fused Compare & Write: Not Supported 00:07:56.116 Scatter-Gather List 00:07:56.116 SGL Command Set: Supported 00:07:56.116 SGL Keyed: Not Supported 00:07:56.116 SGL Bit Bucket Descriptor: Not Supported 00:07:56.116 SGL Metadata Pointer: Not Supported 00:07:56.116 Oversized SGL: Not Supported 00:07:56.116 SGL Metadata Address: Not Supported 00:07:56.116 SGL Offset: Not Supported 00:07:56.116 Transport SGL Data Block: Not Supported 00:07:56.116 Replay Protected Memory Block: Not Supported 00:07:56.116 00:07:56.116 Firmware Slot Information 00:07:56.116 ========================= 00:07:56.116 Active slot: 1 00:07:56.116 Slot 1 Firmware Revision: 1.0 00:07:56.116 00:07:56.116 00:07:56.116 Commands Supported and Effects 00:07:56.116 ============================== 00:07:56.116 Admin Commands 00:07:56.116 -------------- 00:07:56.116 Delete I/O Submission Queue (00h): Supported 00:07:56.116 Create I/O Submission Queue (01h): Supported 00:07:56.116 Get Log Page (02h): Supported 00:07:56.116 Delete I/O Completion Queue (04h): Supported 00:07:56.116 Create I/O Completion Queue (05h): Supported 00:07:56.116 Identify (06h): Supported 00:07:56.116 Abort (08h): Supported 00:07:56.116 Set Features (09h): Supported 00:07:56.116 Get Features (0Ah): Supported 00:07:56.116 Asynchronous Event Request (0Ch): Supported 00:07:56.116 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.116 Directive Send (19h): Supported 00:07:56.116 Directive Receive (1Ah): Supported 00:07:56.116 Virtualization Management (1Ch): Supported 00:07:56.116 Doorbell Buffer Config (7Ch): Supported 00:07:56.116 Format NVM (80h): Supported LBA-Change 00:07:56.116 I/O Commands 00:07:56.116 ------------ 00:07:56.116 Flush (00h): Supported LBA-Change 00:07:56.116 Write (01h): Supported LBA-Change 00:07:56.116 Read (02h): Supported 00:07:56.116 Compare (05h): Supported 00:07:56.116 Write Zeroes (08h): Supported LBA-Change 00:07:56.116 Dataset Management (09h): Supported LBA-Change 00:07:56.116 Unknown (0Ch): Supported 00:07:56.116 Unknown (12h): Supported 00:07:56.116 Copy (19h): Supported LBA-Change 00:07:56.116 Unknown (1Dh): Supported LBA-Change 00:07:56.116 00:07:56.116 Error Log 00:07:56.116 ========= 00:07:56.116 00:07:56.116 Arbitration 00:07:56.116 =========== 00:07:56.116 Arbitration Burst: no limit 00:07:56.116 00:07:56.116 Power Management 00:07:56.116 ================ 00:07:56.116 Number of Power States: 1 00:07:56.116 Current Power State: Power State #0 00:07:56.116 Power State #0: 00:07:56.116 Max Power: 25.00 W 00:07:56.116 Non-Operational State: Operational 00:07:56.116 Entry Latency: 16 microseconds 00:07:56.116 Exit Latency: 4 microseconds 00:07:56.116 Relative Read Throughput: 0 00:07:56.116 Relative Read Latency: 0 00:07:56.116 Relative Write Throughput: 0 00:07:56.116 Relative Write Latency: 0 00:07:56.116 Idle Power: Not Reported 00:07:56.116 Active Power: Not Reported 00:07:56.116 Non-Operational Permissive Mode: Not Supported 00:07:56.116 00:07:56.116 Health Information 00:07:56.116 ================== 00:07:56.116 Critical Warnings: 00:07:56.116 Available Spare Space: OK 00:07:56.116 Temperature: OK 00:07:56.116 Device Reliability: OK 00:07:56.116 Read Only: No 
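Temperatures in these health sections are reported in Kelvin with the Celsius value in parentheses; the conversion is a plain integer offset, which the lines printed just below can be checked against:

    # 323 Kelvin -> 50 Celsius, matching "Current Temperature: 323 Kelvin (50 Celsius)"
    kelvin=323
    echo "$(( kelvin - 273 )) Celsius"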
00:07:56.116 Volatile Memory Backup: OK 00:07:56.117 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.117 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.117 Available Spare: 0% 00:07:56.117 Available Spare Threshold: 0% 00:07:56.117 Life Percentage Used: 0% 00:07:56.117 Data Units Read: 685 00:07:56.117 Data Units Written: 613 00:07:56.117 Host Read Commands: 36753 00:07:56.117 Host Write Commands: 36539 00:07:56.117 Controller Busy Time: 0 minutes 00:07:56.117 Power Cycles: 0 00:07:56.117 Power On Hours: 0 hours 00:07:56.117 Unsafe Shutdowns: 0 00:07:56.117 Unrecoverable Media Errors: 0 00:07:56.117 Lifetime Error Log Entries: 0 00:07:56.117 Warning Temperature Time: 0 minutes 00:07:56.117 Critical Temperature Time: 0 minutes 00:07:56.117 00:07:56.117 Number of Queues 00:07:56.117 ================ 00:07:56.117 Number of I/O Submission Queues: 64 00:07:56.117 Number of I/O Completion Queues: 64 00:07:56.117 00:07:56.117 ZNS Specific Controller Data 00:07:56.117 ============================ 00:07:56.117 Zone Append Size Limit: 0 00:07:56.117 00:07:56.117 00:07:56.117 Active Namespaces 00:07:56.117 ================= 00:07:56.117 Namespace ID:1 00:07:56.117 Error Recovery Timeout: Unlimited 00:07:56.117 Command Set Identifier: NVM (00h) 00:07:56.117 Deallocate: Supported 00:07:56.117 Deallocated/Unwritten Error: Supported 00:07:56.117 Deallocated Read Value: All 0x00 00:07:56.117 Deallocate in Write Zeroes: Not Supported 00:07:56.117 Deallocated Guard Field: 0xFFFF 00:07:56.117 Flush: Supported 00:07:56.117 Reservation: Not Supported 00:07:56.117 Metadata Transferred as: Separate Metadata Buffer 00:07:56.117 Namespace Sharing Capabilities: Private 00:07:56.117 Size (in LBAs): 1548666 (5GiB) 00:07:56.117 Capacity (in LBAs): 1548666 (5GiB) 00:07:56.117 Utilization (in LBAs): 1548666 (5GiB) 00:07:56.117 Thin Provisioning: Not Supported 00:07:56.117 Per-NS Atomic Units: No 00:07:56.117 Maximum Single Source Range Length: 128 00:07:56.117 Maximum Copy Length: 128 00:07:56.117 Maximum Source Range Count: 128 00:07:56.117 NGUID/EUI64 Never Reused: No 00:07:56.117 Namespace Write Protected: No 00:07:56.117 Number of LBA Formats: 8 00:07:56.117 Current LBA Format: [2024-11-17 23:13:19.796857] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 74367 terminated unexpected 00:07:56.117 LBA Format #07 00:07:56.117 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.117 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.117 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.117 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.117 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.117 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.117 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.117 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.117 00:07:56.117 NVM Specific Namespace Data 00:07:56.117 =========================== 00:07:56.117 Logical Block Storage Tag Mask: 0 00:07:56.117 Protection Information Capabilities: 00:07:56.117 16b Guard Protection Information Storage Tag Support: No 00:07:56.117 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.117 Storage Tag Check Read Support: No 00:07:56.117 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.117 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.117 Extended LBA Format #02: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:56.117 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.117 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.117 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.117 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.117 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.117 ===================================================== 00:07:56.117 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:56.117 ===================================================== 00:07:56.117 Controller Capabilities/Features 00:07:56.117 ================================ 00:07:56.117 Vendor ID: 1b36 00:07:56.117 Subsystem Vendor ID: 1af4 00:07:56.117 Serial Number: 12341 00:07:56.117 Model Number: QEMU NVMe Ctrl 00:07:56.117 Firmware Version: 8.0.0 00:07:56.117 Recommended Arb Burst: 6 00:07:56.117 IEEE OUI Identifier: 00 54 52 00:07:56.117 Multi-path I/O 00:07:56.117 May have multiple subsystem ports: No 00:07:56.117 May have multiple controllers: No 00:07:56.117 Associated with SR-IOV VF: No 00:07:56.117 Max Data Transfer Size: 524288 00:07:56.117 Max Number of Namespaces: 256 00:07:56.117 Max Number of I/O Queues: 64 00:07:56.117 NVMe Specification Version (VS): 1.4 00:07:56.117 NVMe Specification Version (Identify): 1.4 00:07:56.117 Maximum Queue Entries: 2048 00:07:56.117 Contiguous Queues Required: Yes 00:07:56.117 Arbitration Mechanisms Supported 00:07:56.117 Weighted Round Robin: Not Supported 00:07:56.117 Vendor Specific: Not Supported 00:07:56.117 Reset Timeout: 7500 ms 00:07:56.117 Doorbell Stride: 4 bytes 00:07:56.117 NVM Subsystem Reset: Not Supported 00:07:56.117 Command Sets Supported 00:07:56.117 NVM Command Set: Supported 00:07:56.117 Boot Partition: Not Supported 00:07:56.117 Memory Page Size Minimum: 4096 bytes 00:07:56.117 Memory Page Size Maximum: 65536 bytes 00:07:56.117 Persistent Memory Region: Not Supported 00:07:56.117 Optional Asynchronous Events Supported 00:07:56.117 Namespace Attribute Notices: Supported 00:07:56.117 Firmware Activation Notices: Not Supported 00:07:56.117 ANA Change Notices: Not Supported 00:07:56.117 PLE Aggregate Log Change Notices: Not Supported 00:07:56.117 LBA Status Info Alert Notices: Not Supported 00:07:56.117 EGE Aggregate Log Change Notices: Not Supported 00:07:56.117 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.118 Zone Descriptor Change Notices: Not Supported 00:07:56.118 Discovery Log Change Notices: Not Supported 00:07:56.118 Controller Attributes 00:07:56.118 128-bit Host Identifier: Not Supported 00:07:56.118 Non-Operational Permissive Mode: Not Supported 00:07:56.118 NVM Sets: Not Supported 00:07:56.118 Read Recovery Levels: Not Supported 00:07:56.118 Endurance Groups: Not Supported 00:07:56.118 Predictable Latency Mode: Not Supported 00:07:56.118 Traffic Based Keep ALive: Not Supported 00:07:56.118 Namespace Granularity: Not Supported 00:07:56.118 SQ Associations: Not Supported 00:07:56.118 UUID List: Not Supported 00:07:56.118 Multi-Domain Subsystem: Not Supported 00:07:56.118 Fixed Capacity Management: Not Supported 00:07:56.118 Variable Capacity Management: Not Supported 00:07:56.118 Delete Endurance Group: Not Supported 00:07:56.118 Delete NVM Set: Not Supported 00:07:56.118 Extended LBA Formats Supported: Supported 00:07:56.118 Flexible Data Placement 
Supported: Not Supported 00:07:56.118 00:07:56.118 Controller Memory Buffer Support 00:07:56.118 ================================ 00:07:56.118 Supported: No 00:07:56.118 00:07:56.118 Persistent Memory Region Support 00:07:56.118 ================================ 00:07:56.118 Supported: No 00:07:56.118 00:07:56.118 Admin Command Set Attributes 00:07:56.118 ============================ 00:07:56.118 Security Send/Receive: Not Supported 00:07:56.118 Format NVM: Supported 00:07:56.118 Firmware Activate/Download: Not Supported 00:07:56.118 Namespace Management: Supported 00:07:56.118 Device Self-Test: Not Supported 00:07:56.118 Directives: Supported 00:07:56.118 NVMe-MI: Not Supported 00:07:56.118 Virtualization Management: Not Supported 00:07:56.118 Doorbell Buffer Config: Supported 00:07:56.118 Get LBA Status Capability: Not Supported 00:07:56.118 Command & Feature Lockdown Capability: Not Supported 00:07:56.118 Abort Command Limit: 4 00:07:56.118 Async Event Request Limit: 4 00:07:56.118 Number of Firmware Slots: N/A 00:07:56.118 Firmware Slot 1 Read-Only: N/A 00:07:56.118 Firmware Activation Without Reset: N/A 00:07:56.118 Multiple Update Detection Support: N/A 00:07:56.118 Firmware Update Granularity: No Information Provided 00:07:56.118 Per-Namespace SMART Log: Yes 00:07:56.118 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.118 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:56.118 Command Effects Log Page: Supported 00:07:56.118 Get Log Page Extended Data: Supported 00:07:56.118 Telemetry Log Pages: Not Supported 00:07:56.118 Persistent Event Log Pages: Not Supported 00:07:56.118 Supported Log Pages Log Page: May Support 00:07:56.118 Commands Supported & Effects Log Page: Not Supported 00:07:56.118 Feature Identifiers & Effects Log Page:May Support 00:07:56.118 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.118 Data Area 4 for Telemetry Log: Not Supported 00:07:56.118 Error Log Page Entries Supported: 1 00:07:56.118 Keep Alive: Not Supported 00:07:56.118 00:07:56.118 NVM Command Set Attributes 00:07:56.118 ========================== 00:07:56.118 Submission Queue Entry Size 00:07:56.118 Max: 64 00:07:56.118 Min: 64 00:07:56.118 Completion Queue Entry Size 00:07:56.118 Max: 16 00:07:56.118 Min: 16 00:07:56.118 Number of Namespaces: 256 00:07:56.118 Compare Command: Supported 00:07:56.118 Write Uncorrectable Command: Not Supported 00:07:56.118 Dataset Management Command: Supported 00:07:56.118 Write Zeroes Command: Supported 00:07:56.118 Set Features Save Field: Supported 00:07:56.118 Reservations: Not Supported 00:07:56.118 Timestamp: Supported 00:07:56.118 Copy: Supported 00:07:56.118 Volatile Write Cache: Present 00:07:56.118 Atomic Write Unit (Normal): 1 00:07:56.118 Atomic Write Unit (PFail): 1 00:07:56.118 Atomic Compare & Write Unit: 1 00:07:56.118 Fused Compare & Write: Not Supported 00:07:56.118 Scatter-Gather List 00:07:56.118 SGL Command Set: Supported 00:07:56.118 SGL Keyed: Not Supported 00:07:56.118 SGL Bit Bucket Descriptor: Not Supported 00:07:56.118 SGL Metadata Pointer: Not Supported 00:07:56.118 Oversized SGL: Not Supported 00:07:56.118 SGL Metadata Address: Not Supported 00:07:56.118 SGL Offset: Not Supported 00:07:56.118 Transport SGL Data Block: Not Supported 00:07:56.118 Replay Protected Memory Block: Not Supported 00:07:56.118 00:07:56.118 Firmware Slot Information 00:07:56.118 ========================= 00:07:56.118 Active slot: 1 00:07:56.118 Slot 1 Firmware Revision: 1.0 00:07:56.118 00:07:56.118 00:07:56.118 Commands Supported and Effects 
00:07:56.118 ============================== 00:07:56.118 Admin Commands 00:07:56.118 -------------- 00:07:56.118 Delete I/O Submission Queue (00h): Supported 00:07:56.118 Create I/O Submission Queue (01h): Supported 00:07:56.118 Get Log Page (02h): Supported 00:07:56.118 Delete I/O Completion Queue (04h): Supported 00:07:56.118 Create I/O Completion Queue (05h): Supported 00:07:56.118 Identify (06h): Supported 00:07:56.118 Abort (08h): Supported 00:07:56.118 Set Features (09h): Supported 00:07:56.118 Get Features (0Ah): Supported 00:07:56.118 Asynchronous Event Request (0Ch): Supported 00:07:56.118 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.118 Directive Send (19h): Supported 00:07:56.118 Directive Receive (1Ah): Supported 00:07:56.118 Virtualization Management (1Ch): Supported 00:07:56.118 Doorbell Buffer Config (7Ch): Supported 00:07:56.119 Format NVM (80h): Supported LBA-Change 00:07:56.119 I/O Commands 00:07:56.119 ------------ 00:07:56.119 Flush (00h): Supported LBA-Change 00:07:56.119 Write (01h): Supported LBA-Change 00:07:56.119 Read (02h): Supported 00:07:56.119 Compare (05h): Supported 00:07:56.119 Write Zeroes (08h): Supported LBA-Change 00:07:56.119 Dataset Management (09h): Supported LBA-Change 00:07:56.119 Unknown (0Ch): Supported 00:07:56.119 Unknown (12h): Supported 00:07:56.119 Copy (19h): Supported LBA-Change 00:07:56.119 Unknown (1Dh): Supported LBA-Change 00:07:56.119 00:07:56.119 Error Log 00:07:56.119 ========= 00:07:56.119 00:07:56.119 Arbitration 00:07:56.119 =========== 00:07:56.119 Arbitration Burst: no limit 00:07:56.119 00:07:56.119 Power Management 00:07:56.119 ================ 00:07:56.119 Number of Power States: 1 00:07:56.119 Current Power State: Power State #0 00:07:56.119 Power State #0: 00:07:56.119 Max Power: 25.00 W 00:07:56.119 Non-Operational State: Operational 00:07:56.119 Entry Latency: 16 microseconds 00:07:56.119 Exit Latency: 4 microseconds 00:07:56.119 Relative Read Throughput: 0 00:07:56.119 Relative Read Latency: 0 00:07:56.119 Relative Write Throughput: 0 00:07:56.119 Relative Write Latency: 0 00:07:56.119 Idle Power: Not Reported 00:07:56.119 Active Power: Not Reported 00:07:56.119 Non-Operational Permissive Mode: Not Supported 00:07:56.119 00:07:56.119 Health Information 00:07:56.119 ================== 00:07:56.119 Critical Warnings: 00:07:56.119 Available Spare Space: OK 00:07:56.119 Temperature: OK 00:07:56.119 Device Reliability: OK 00:07:56.119 Read Only: No 00:07:56.119 Volatile Memory Backup: OK 00:07:56.119 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.119 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.119 Available Spare: 0% 00:07:56.119 Available Spare Threshold: 0% 00:07:56.119 Life Percentage Used: 0% 00:07:56.119 Data Units Read: 1045 00:07:56.119 Data Units Written: 912 00:07:56.119 Host Read Commands: 54622 00:07:56.119 Host Write Commands: 53397 00:07:56.119 Controller Busy Time: 0 minutes 00:07:56.119 Power Cycles: 0 00:07:56.119 Power On Hours: 0 hours 00:07:56.119 Unsafe Shutdowns: 0 00:07:56.119 Unrecoverable Media Errors: 0 00:07:56.119 Lifetime Error Log Entries: 0 00:07:56.119 Warning Temperature Time: 0 minutes 00:07:56.119 Critical Temperature Time: 0 minutes 00:07:56.119 00:07:56.119 Number of Queues 00:07:56.119 ================ 00:07:56.119 Number of I/O Submission Queues: 64 00:07:56.119 Number of I/O Completion Queues: 64 00:07:56.119 00:07:56.119 ZNS Specific Controller Data 00:07:56.119 ============================ 00:07:56.119 Zone Append Size Limit: 0 00:07:56.119 
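Namespace sizes in these dumps are reported in LBAs. For the 12341 controller whose namespace follows, LBA format #04 (4096-byte data blocks, no metadata) is current, so the byte size is simply the product; a quick sanity check against the figures in the log:

    # 1310720 LBAs x 4096 B = 5368709120 B = 5 GiB,
    # matching "Size (in LBAs): 1310720 (5GiB)" reported below.
    lbas=1310720
    block_size=4096
    bytes=$(( lbas * block_size ))
    echo "$bytes bytes = $(( bytes / 1024 / 1024 / 1024 )) GiB"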
00:07:56.119 00:07:56.119 Active Namespaces 00:07:56.119 ================= 00:07:56.119 Namespace ID:1 00:07:56.119 Error Recovery Timeout: Unlimited 00:07:56.119 Command Set Identifier: NVM (00h) 00:07:56.119 Deallocate: Supported 00:07:56.119 Deallocated/Unwritten Error: Supported 00:07:56.119 Deallocated Read Value: All 0x00 00:07:56.119 Deallocate in Write Zeroes: Not Supported 00:07:56.119 Deallocated Guard Field: 0xFFFF 00:07:56.119 Flush: Supported 00:07:56.119 Reservation: Not Supported 00:07:56.119 Namespace Sharing Capabilities: Private 00:07:56.119 Size (in LBAs): 1310720 (5GiB) 00:07:56.119 Capacity (in LBAs): 1310720 (5GiB) 00:07:56.119 Utilization (in LBAs): 1310720 (5GiB) 00:07:56.119 Thin Provisioning: Not Supported 00:07:56.119 Per-NS Atomic Units: No 00:07:56.119 Maximum Single Source Range Length: 128 00:07:56.119 Maximum Copy Length: 128 00:07:56.119 Maximum Source Range Count: 128 00:07:56.119 NGUID/EUI64 Never Reused: No 00:07:56.119 Namespace Write Protected: No 00:07:56.119 Number of LBA Formats: 8 00:07:56.119 Current LBA Format: LBA Format #04 00:07:56.119 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.119 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.119 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.119 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.119 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.119 LBA Forma[2024-11-17 23:13:19.798507] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 74367 terminated unexpected 00:07:56.119 t #05: Data Size: 4096 Metadata Size: 8 00:07:56.119 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.119 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.119 00:07:56.119 NVM Specific Namespace Data 00:07:56.119 =========================== 00:07:56.119 Logical Block Storage Tag Mask: 0 00:07:56.119 Protection Information Capabilities: 00:07:56.119 16b Guard Protection Information Storage Tag Support: No 00:07:56.119 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.119 Storage Tag Check Read Support: No 00:07:56.119 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.119 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.119 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.119 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.119 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.119 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.119 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.119 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.119 ===================================================== 00:07:56.119 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:56.119 ===================================================== 00:07:56.120 Controller Capabilities/Features 00:07:56.120 ================================ 00:07:56.120 Vendor ID: 1b36 00:07:56.120 Subsystem Vendor ID: 1af4 00:07:56.120 Serial Number: 12342 00:07:56.120 Model Number: QEMU NVMe Ctrl 00:07:56.120 Firmware Version: 8.0.0 00:07:56.120 Recommended Arb Burst: 6 00:07:56.120 IEEE OUI Identifier: 00 54 52 00:07:56.120 Multi-path I/O 
00:07:56.120 May have multiple subsystem ports: No 00:07:56.120 May have multiple controllers: No 00:07:56.120 Associated with SR-IOV VF: No 00:07:56.120 Max Data Transfer Size: 524288 00:07:56.120 Max Number of Namespaces: 256 00:07:56.120 Max Number of I/O Queues: 64 00:07:56.120 NVMe Specification Version (VS): 1.4 00:07:56.120 NVMe Specification Version (Identify): 1.4 00:07:56.120 Maximum Queue Entries: 2048 00:07:56.120 Contiguous Queues Required: Yes 00:07:56.120 Arbitration Mechanisms Supported 00:07:56.120 Weighted Round Robin: Not Supported 00:07:56.120 Vendor Specific: Not Supported 00:07:56.120 Reset Timeout: 7500 ms 00:07:56.120 Doorbell Stride: 4 bytes 00:07:56.120 NVM Subsystem Reset: Not Supported 00:07:56.120 Command Sets Supported 00:07:56.120 NVM Command Set: Supported 00:07:56.120 Boot Partition: Not Supported 00:07:56.120 Memory Page Size Minimum: 4096 bytes 00:07:56.120 Memory Page Size Maximum: 65536 bytes 00:07:56.120 Persistent Memory Region: Not Supported 00:07:56.120 Optional Asynchronous Events Supported 00:07:56.120 Namespace Attribute Notices: Supported 00:07:56.120 Firmware Activation Notices: Not Supported 00:07:56.120 ANA Change Notices: Not Supported 00:07:56.120 PLE Aggregate Log Change Notices: Not Supported 00:07:56.120 LBA Status Info Alert Notices: Not Supported 00:07:56.120 EGE Aggregate Log Change Notices: Not Supported 00:07:56.120 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.120 Zone Descriptor Change Notices: Not Supported 00:07:56.120 Discovery Log Change Notices: Not Supported 00:07:56.120 Controller Attributes 00:07:56.120 128-bit Host Identifier: Not Supported 00:07:56.120 Non-Operational Permissive Mode: Not Supported 00:07:56.120 NVM Sets: Not Supported 00:07:56.120 Read Recovery Levels: Not Supported 00:07:56.120 Endurance Groups: Not Supported 00:07:56.120 Predictable Latency Mode: Not Supported 00:07:56.120 Traffic Based Keep ALive: Not Supported 00:07:56.120 Namespace Granularity: Not Supported 00:07:56.120 SQ Associations: Not Supported 00:07:56.120 UUID List: Not Supported 00:07:56.120 Multi-Domain Subsystem: Not Supported 00:07:56.120 Fixed Capacity Management: Not Supported 00:07:56.120 Variable Capacity Management: Not Supported 00:07:56.120 Delete Endurance Group: Not Supported 00:07:56.120 Delete NVM Set: Not Supported 00:07:56.120 Extended LBA Formats Supported: Supported 00:07:56.120 Flexible Data Placement Supported: Not Supported 00:07:56.120 00:07:56.120 Controller Memory Buffer Support 00:07:56.120 ================================ 00:07:56.120 Supported: No 00:07:56.120 00:07:56.120 Persistent Memory Region Support 00:07:56.120 ================================ 00:07:56.120 Supported: No 00:07:56.120 00:07:56.120 Admin Command Set Attributes 00:07:56.120 ============================ 00:07:56.120 Security Send/Receive: Not Supported 00:07:56.120 Format NVM: Supported 00:07:56.120 Firmware Activate/Download: Not Supported 00:07:56.120 Namespace Management: Supported 00:07:56.120 Device Self-Test: Not Supported 00:07:56.120 Directives: Supported 00:07:56.120 NVMe-MI: Not Supported 00:07:56.120 Virtualization Management: Not Supported 00:07:56.120 Doorbell Buffer Config: Supported 00:07:56.120 Get LBA Status Capability: Not Supported 00:07:56.120 Command & Feature Lockdown Capability: Not Supported 00:07:56.120 Abort Command Limit: 4 00:07:56.120 Async Event Request Limit: 4 00:07:56.120 Number of Firmware Slots: N/A 00:07:56.120 Firmware Slot 1 Read-Only: N/A 00:07:56.120 Firmware Activation Without Reset: N/A 
00:07:56.120 Multiple Update Detection Support: N/A 00:07:56.120 Firmware Update Granularity: No Information Provided 00:07:56.120 Per-Namespace SMART Log: Yes 00:07:56.120 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.120 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:56.120 Command Effects Log Page: Supported 00:07:56.120 Get Log Page Extended Data: Supported 00:07:56.120 Telemetry Log Pages: Not Supported 00:07:56.120 Persistent Event Log Pages: Not Supported 00:07:56.121 Supported Log Pages Log Page: May Support 00:07:56.121 Commands Supported & Effects Log Page: Not Supported 00:07:56.121 Feature Identifiers & Effects Log Page:May Support 00:07:56.121 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.121 Data Area 4 for Telemetry Log: Not Supported 00:07:56.121 Error Log Page Entries Supported: 1 00:07:56.121 Keep Alive: Not Supported 00:07:56.121 00:07:56.121 NVM Command Set Attributes 00:07:56.121 ========================== 00:07:56.121 Submission Queue Entry Size 00:07:56.121 Max: 64 00:07:56.121 Min: 64 00:07:56.121 Completion Queue Entry Size 00:07:56.121 Max: 16 00:07:56.121 Min: 16 00:07:56.121 Number of Namespaces: 256 00:07:56.121 Compare Command: Supported 00:07:56.121 Write Uncorrectable Command: Not Supported 00:07:56.121 Dataset Management Command: Supported 00:07:56.121 Write Zeroes Command: Supported 00:07:56.121 Set Features Save Field: Supported 00:07:56.121 Reservations: Not Supported 00:07:56.121 Timestamp: Supported 00:07:56.121 Copy: Supported 00:07:56.121 Volatile Write Cache: Present 00:07:56.121 Atomic Write Unit (Normal): 1 00:07:56.121 Atomic Write Unit (PFail): 1 00:07:56.121 Atomic Compare & Write Unit: 1 00:07:56.121 Fused Compare & Write: Not Supported 00:07:56.121 Scatter-Gather List 00:07:56.121 SGL Command Set: Supported 00:07:56.121 SGL Keyed: Not Supported 00:07:56.121 SGL Bit Bucket Descriptor: Not Supported 00:07:56.121 SGL Metadata Pointer: Not Supported 00:07:56.121 Oversized SGL: Not Supported 00:07:56.121 SGL Metadata Address: Not Supported 00:07:56.121 SGL Offset: Not Supported 00:07:56.121 Transport SGL Data Block: Not Supported 00:07:56.121 Replay Protected Memory Block: Not Supported 00:07:56.121 00:07:56.121 Firmware Slot Information 00:07:56.121 ========================= 00:07:56.121 Active slot: 1 00:07:56.121 Slot 1 Firmware Revision: 1.0 00:07:56.121 00:07:56.121 00:07:56.121 Commands Supported and Effects 00:07:56.121 ============================== 00:07:56.121 Admin Commands 00:07:56.121 -------------- 00:07:56.121 Delete I/O Submission Queue (00h): Supported 00:07:56.121 Create I/O Submission Queue (01h): Supported 00:07:56.121 Get Log Page (02h): Supported 00:07:56.121 Delete I/O Completion Queue (04h): Supported 00:07:56.121 Create I/O Completion Queue (05h): Supported 00:07:56.121 Identify (06h): Supported 00:07:56.121 Abort (08h): Supported 00:07:56.121 Set Features (09h): Supported 00:07:56.121 Get Features (0Ah): Supported 00:07:56.121 Asynchronous Event Request (0Ch): Supported 00:07:56.121 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.121 Directive Send (19h): Supported 00:07:56.121 Directive Receive (1Ah): Supported 00:07:56.121 Virtualization Management (1Ch): Supported 00:07:56.121 Doorbell Buffer Config (7Ch): Supported 00:07:56.121 Format NVM (80h): Supported LBA-Change 00:07:56.121 I/O Commands 00:07:56.121 ------------ 00:07:56.121 Flush (00h): Supported LBA-Change 00:07:56.121 Write (01h): Supported LBA-Change 00:07:56.121 Read (02h): Supported 00:07:56.121 Compare (05h): 
Supported 00:07:56.121 Write Zeroes (08h): Supported LBA-Change 00:07:56.121 Dataset Management (09h): Supported LBA-Change 00:07:56.121 Unknown (0Ch): Supported 00:07:56.121 Unknown (12h): Supported 00:07:56.121 Copy (19h): Supported LBA-Change 00:07:56.121 Unknown (1Dh): Supported LBA-Change 00:07:56.121 00:07:56.121 Error Log 00:07:56.121 ========= 00:07:56.121 00:07:56.121 Arbitration 00:07:56.121 =========== 00:07:56.121 Arbitration Burst: no limit 00:07:56.121 00:07:56.121 Power Management 00:07:56.121 ================ 00:07:56.121 Number of Power States: 1 00:07:56.121 Current Power State: Power State #0 00:07:56.121 Power State #0: 00:07:56.121 Max Power: 25.00 W 00:07:56.121 Non-Operational State: Operational 00:07:56.121 Entry Latency: 16 microseconds 00:07:56.121 Exit Latency: 4 microseconds 00:07:56.121 Relative Read Throughput: 0 00:07:56.121 Relative Read Latency: 0 00:07:56.121 Relative Write Throughput: 0 00:07:56.121 Relative Write Latency: 0 00:07:56.121 Idle Power: Not Reported 00:07:56.121 Active Power: Not Reported 00:07:56.121 Non-Operational Permissive Mode: Not Supported 00:07:56.121 00:07:56.121 Health Information 00:07:56.121 ================== 00:07:56.121 Critical Warnings: 00:07:56.121 Available Spare Space: OK 00:07:56.121 Temperature: OK 00:07:56.121 Device Reliability: OK 00:07:56.121 Read Only: No 00:07:56.121 Volatile Memory Backup: OK 00:07:56.121 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.121 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.121 Available Spare: 0% 00:07:56.121 Available Spare Threshold: 0% 00:07:56.121 Life Percentage Used: 0% 00:07:56.121 Data Units Read: 2207 00:07:56.121 Data Units Written: 1994 00:07:56.121 Host Read Commands: 112401 00:07:56.121 Host Write Commands: 110670 00:07:56.121 Controller Busy Time: 0 minutes 00:07:56.121 Power Cycles: 0 00:07:56.121 Power On Hours: 0 hours 00:07:56.121 Unsafe Shutdowns: 0 00:07:56.121 Unrecoverable Media Errors: 0 00:07:56.121 Lifetime Error Log Entries: 0 00:07:56.121 Warning Temperature Time: 0 minutes 00:07:56.121 Critical Temperature Time: 0 minutes 00:07:56.121 00:07:56.121 Number of Queues 00:07:56.121 ================ 00:07:56.121 Number of I/O Submission Queues: 64 00:07:56.121 Number of I/O Completion Queues: 64 00:07:56.121 00:07:56.121 ZNS Specific Controller Data 00:07:56.121 ============================ 00:07:56.122 Zone Append Size Limit: 0 00:07:56.122 00:07:56.122 00:07:56.122 Active Namespaces 00:07:56.122 ================= 00:07:56.122 Namespace ID:1 00:07:56.122 Error Recovery Timeout: Unlimited 00:07:56.122 Command Set Identifier: NVM (00h) 00:07:56.122 Deallocate: Supported 00:07:56.122 Deallocated/Unwritten Error: Supported 00:07:56.122 Deallocated Read Value: All 0x00 00:07:56.122 Deallocate in Write Zeroes: Not Supported 00:07:56.122 Deallocated Guard Field: 0xFFFF 00:07:56.122 Flush: Supported 00:07:56.122 Reservation: Not Supported 00:07:56.122 Namespace Sharing Capabilities: Private 00:07:56.122 Size (in LBAs): 1048576 (4GiB) 00:07:56.122 Capacity (in LBAs): 1048576 (4GiB) 00:07:56.122 Utilization (in LBAs): 1048576 (4GiB) 00:07:56.122 Thin Provisioning: Not Supported 00:07:56.122 Per-NS Atomic Units: No 00:07:56.122 Maximum Single Source Range Length: 128 00:07:56.122 Maximum Copy Length: 128 00:07:56.122 Maximum Source Range Count: 128 00:07:56.122 NGUID/EUI64 Never Reused: No 00:07:56.122 Namespace Write Protected: No 00:07:56.122 Number of LBA Formats: 8 00:07:56.122 Current LBA Format: LBA Format #04 00:07:56.122 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:07:56.122 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.122 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.122 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.122 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.122 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.122 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.122 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.122 00:07:56.122 NVM Specific Namespace Data 00:07:56.122 =========================== 00:07:56.122 Logical Block Storage Tag Mask: 0 00:07:56.122 Protection Information Capabilities: 00:07:56.122 16b Guard Protection Information Storage Tag Support: No 00:07:56.122 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.122 Storage Tag Check Read Support: No 00:07:56.122 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.122 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.122 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.122 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.122 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.122 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.122 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.122 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.122 Namespace ID:2 00:07:56.122 Error Recovery Timeout: Unlimited 00:07:56.122 Command Set Identifier: NVM (00h) 00:07:56.122 Deallocate: Supported 00:07:56.122 Deallocated/Unwritten Error: Supported 00:07:56.122 Deallocated Read Value: All 0x00 00:07:56.122 Deallocate in Write Zeroes: Not Supported 00:07:56.122 Deallocated Guard Field: 0xFFFF 00:07:56.122 Flush: Supported 00:07:56.122 Reservation: Not Supported 00:07:56.122 Namespace Sharing Capabilities: Private 00:07:56.122 Size (in LBAs): 1048576 (4GiB) 00:07:56.122 Capacity (in LBAs): 1048576 (4GiB) 00:07:56.122 Utilization (in LBAs): 1048576 (4GiB) 00:07:56.122 Thin Provisioning: Not Supported 00:07:56.122 Per-NS Atomic Units: No 00:07:56.122 Maximum Single Source Range Length: 128 00:07:56.122 Maximum Copy Length: 128 00:07:56.122 Maximum Source Range Count: 128 00:07:56.122 NGUID/EUI64 Never Reused: No 00:07:56.122 Namespace Write Protected: No 00:07:56.122 Number of LBA Formats: 8 00:07:56.122 Current LBA Format: LBA Format #04 00:07:56.122 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.122 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.122 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.122 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.122 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.122 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.122 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.122 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.122 00:07:56.122 NVM Specific Namespace Data 00:07:56.122 =========================== 00:07:56.122 Logical Block Storage Tag Mask: 0 00:07:56.122 Protection Information Capabilities: 00:07:56.122 16b Guard Protection Information Storage Tag Support: No 00:07:56.122 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:07:56.122 Storage Tag Check Read Support: No 00:07:56.122 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.122 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.122 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.122 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.122 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.122 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.122 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.122 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.122 Namespace ID:3 00:07:56.122 Error Recovery Timeout: Unlimited 00:07:56.122 Command Set Identifier: NVM (00h) 00:07:56.122 Deallocate: Supported 00:07:56.122 Deallocated/Unwritten Error: Supported 00:07:56.122 Deallocated Read Value: All 0x00 00:07:56.122 Deallocate in Write Zeroes: Not Supported 00:07:56.122 Deallocated Guard Field: 0xFFFF 00:07:56.122 Flush: Supported 00:07:56.122 Reservation: Not Supported 00:07:56.122 Namespace Sharing Capabilities: Private 00:07:56.122 Size (in LBAs): 1048576 (4GiB) 00:07:56.122 Capacity (in LBAs): 1048576 (4GiB) 00:07:56.122 Utilization (in LBAs): 1048576 (4GiB) 00:07:56.122 Thin Provisioning: Not Supported 00:07:56.122 Per-NS Atomic Units: No 00:07:56.122 Maximum Single Source Range Length: 128 00:07:56.122 Maximum Copy Length: 128 00:07:56.123 Maximum Source Range Count: 128 00:07:56.123 NGUID/EUI64 Never Reused: No 00:07:56.123 Namespace Write Protected: No 00:07:56.123 Number of LBA Formats: 8 00:07:56.123 Current LBA Format: LBA Format #04 00:07:56.123 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.123 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.123 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.123 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.123 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.123 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.123 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.123 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.123 00:07:56.123 NVM Specific Namespace Data 00:07:56.123 =========================== 00:07:56.123 Logical Block Storage Tag Mask: 0 00:07:56.123 Protection Information Capabilities: 00:07:56.123 16b Guard Protection Information Storage Tag Support: No 00:07:56.123 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.123 Storage Tag Check Read Support: No 00:07:56.123 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.123 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.123 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.123 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.123 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.123 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.123 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.123 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.123 23:13:19 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:56.123 23:13:19 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:56.418 ===================================================== 00:07:56.418 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:56.418 ===================================================== 00:07:56.418 Controller Capabilities/Features 00:07:56.418 ================================ 00:07:56.418 Vendor ID: 1b36 00:07:56.418 Subsystem Vendor ID: 1af4 00:07:56.418 Serial Number: 12340 00:07:56.418 Model Number: QEMU NVMe Ctrl 00:07:56.418 Firmware Version: 8.0.0 00:07:56.418 Recommended Arb Burst: 6 00:07:56.418 IEEE OUI Identifier: 00 54 52 00:07:56.418 Multi-path I/O 00:07:56.418 May have multiple subsystem ports: No 00:07:56.418 May have multiple controllers: No 00:07:56.418 Associated with SR-IOV VF: No 00:07:56.418 Max Data Transfer Size: 524288 00:07:56.418 Max Number of Namespaces: 256 00:07:56.418 Max Number of I/O Queues: 64 00:07:56.418 NVMe Specification Version (VS): 1.4 00:07:56.418 NVMe Specification Version (Identify): 1.4 00:07:56.418 Maximum Queue Entries: 2048 00:07:56.418 Contiguous Queues Required: Yes 00:07:56.418 Arbitration Mechanisms Supported 00:07:56.418 Weighted Round Robin: Not Supported 00:07:56.418 Vendor Specific: Not Supported 00:07:56.418 Reset Timeout: 7500 ms 00:07:56.418 Doorbell Stride: 4 bytes 00:07:56.418 NVM Subsystem Reset: Not Supported 00:07:56.418 Command Sets Supported 00:07:56.418 NVM Command Set: Supported 00:07:56.418 Boot Partition: Not Supported 00:07:56.418 Memory Page Size Minimum: 4096 bytes 00:07:56.418 Memory Page Size Maximum: 65536 bytes 00:07:56.418 Persistent Memory Region: Not Supported 00:07:56.418 Optional Asynchronous Events Supported 00:07:56.418 Namespace Attribute Notices: Supported 00:07:56.418 Firmware Activation Notices: Not Supported 00:07:56.418 ANA Change Notices: Not Supported 00:07:56.418 PLE Aggregate Log Change Notices: Not Supported 00:07:56.418 LBA Status Info Alert Notices: Not Supported 00:07:56.418 EGE Aggregate Log Change Notices: Not Supported 00:07:56.418 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.418 Zone Descriptor Change Notices: Not Supported 00:07:56.418 Discovery Log Change Notices: Not Supported 00:07:56.418 Controller Attributes 00:07:56.418 128-bit Host Identifier: Not Supported 00:07:56.418 Non-Operational Permissive Mode: Not Supported 00:07:56.418 NVM Sets: Not Supported 00:07:56.418 Read Recovery Levels: Not Supported 00:07:56.418 Endurance Groups: Not Supported 00:07:56.418 Predictable Latency Mode: Not Supported 00:07:56.418 Traffic Based Keep ALive: Not Supported 00:07:56.418 Namespace Granularity: Not Supported 00:07:56.418 SQ Associations: Not Supported 00:07:56.418 UUID List: Not Supported 00:07:56.418 Multi-Domain Subsystem: Not Supported 00:07:56.418 Fixed Capacity Management: Not Supported 00:07:56.418 Variable Capacity Management: Not Supported 00:07:56.418 Delete Endurance Group: Not Supported 00:07:56.418 Delete NVM Set: Not Supported 00:07:56.418 Extended LBA Formats Supported: Supported 00:07:56.418 Flexible Data Placement Supported: Not Supported 00:07:56.418 00:07:56.418 Controller Memory Buffer Support 00:07:56.418 ================================ 00:07:56.418 Supported: No 00:07:56.418 00:07:56.418 Persistent Memory Region Support 00:07:56.418 
================================ 00:07:56.418 Supported: No 00:07:56.418 00:07:56.418 Admin Command Set Attributes 00:07:56.419 ============================ 00:07:56.419 Security Send/Receive: Not Supported 00:07:56.419 Format NVM: Supported 00:07:56.419 Firmware Activate/Download: Not Supported 00:07:56.419 Namespace Management: Supported 00:07:56.419 Device Self-Test: Not Supported 00:07:56.419 Directives: Supported 00:07:56.419 NVMe-MI: Not Supported 00:07:56.419 Virtualization Management: Not Supported 00:07:56.419 Doorbell Buffer Config: Supported 00:07:56.419 Get LBA Status Capability: Not Supported 00:07:56.419 Command & Feature Lockdown Capability: Not Supported 00:07:56.419 Abort Command Limit: 4 00:07:56.419 Async Event Request Limit: 4 00:07:56.419 Number of Firmware Slots: N/A 00:07:56.419 Firmware Slot 1 Read-Only: N/A 00:07:56.419 Firmware Activation Without Reset: N/A 00:07:56.419 Multiple Update Detection Support: N/A 00:07:56.419 Firmware Update Granularity: No Information Provided 00:07:56.419 Per-Namespace SMART Log: Yes 00:07:56.419 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.419 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:56.419 Command Effects Log Page: Supported 00:07:56.419 Get Log Page Extended Data: Supported 00:07:56.419 Telemetry Log Pages: Not Supported 00:07:56.419 Persistent Event Log Pages: Not Supported 00:07:56.419 Supported Log Pages Log Page: May Support 00:07:56.419 Commands Supported & Effects Log Page: Not Supported 00:07:56.419 Feature Identifiers & Effects Log Page:May Support 00:07:56.419 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.419 Data Area 4 for Telemetry Log: Not Supported 00:07:56.419 Error Log Page Entries Supported: 1 00:07:56.419 Keep Alive: Not Supported 00:07:56.419 00:07:56.419 NVM Command Set Attributes 00:07:56.419 ========================== 00:07:56.419 Submission Queue Entry Size 00:07:56.419 Max: 64 00:07:56.419 Min: 64 00:07:56.419 Completion Queue Entry Size 00:07:56.419 Max: 16 00:07:56.419 Min: 16 00:07:56.419 Number of Namespaces: 256 00:07:56.419 Compare Command: Supported 00:07:56.419 Write Uncorrectable Command: Not Supported 00:07:56.419 Dataset Management Command: Supported 00:07:56.419 Write Zeroes Command: Supported 00:07:56.419 Set Features Save Field: Supported 00:07:56.419 Reservations: Not Supported 00:07:56.419 Timestamp: Supported 00:07:56.419 Copy: Supported 00:07:56.419 Volatile Write Cache: Present 00:07:56.419 Atomic Write Unit (Normal): 1 00:07:56.419 Atomic Write Unit (PFail): 1 00:07:56.419 Atomic Compare & Write Unit: 1 00:07:56.419 Fused Compare & Write: Not Supported 00:07:56.419 Scatter-Gather List 00:07:56.419 SGL Command Set: Supported 00:07:56.419 SGL Keyed: Not Supported 00:07:56.419 SGL Bit Bucket Descriptor: Not Supported 00:07:56.419 SGL Metadata Pointer: Not Supported 00:07:56.419 Oversized SGL: Not Supported 00:07:56.419 SGL Metadata Address: Not Supported 00:07:56.419 SGL Offset: Not Supported 00:07:56.419 Transport SGL Data Block: Not Supported 00:07:56.419 Replay Protected Memory Block: Not Supported 00:07:56.419 00:07:56.419 Firmware Slot Information 00:07:56.419 ========================= 00:07:56.419 Active slot: 1 00:07:56.419 Slot 1 Firmware Revision: 1.0 00:07:56.419 00:07:56.419 00:07:56.419 Commands Supported and Effects 00:07:56.419 ============================== 00:07:56.419 Admin Commands 00:07:56.419 -------------- 00:07:56.419 Delete I/O Submission Queue (00h): Supported 00:07:56.419 Create I/O Submission Queue (01h): Supported 00:07:56.419 
Get Log Page (02h): Supported 00:07:56.419 Delete I/O Completion Queue (04h): Supported 00:07:56.419 Create I/O Completion Queue (05h): Supported 00:07:56.419 Identify (06h): Supported 00:07:56.419 Abort (08h): Supported 00:07:56.419 Set Features (09h): Supported 00:07:56.419 Get Features (0Ah): Supported 00:07:56.419 Asynchronous Event Request (0Ch): Supported 00:07:56.419 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.419 Directive Send (19h): Supported 00:07:56.419 Directive Receive (1Ah): Supported 00:07:56.419 Virtualization Management (1Ch): Supported 00:07:56.419 Doorbell Buffer Config (7Ch): Supported 00:07:56.419 Format NVM (80h): Supported LBA-Change 00:07:56.419 I/O Commands 00:07:56.419 ------------ 00:07:56.419 Flush (00h): Supported LBA-Change 00:07:56.419 Write (01h): Supported LBA-Change 00:07:56.419 Read (02h): Supported 00:07:56.419 Compare (05h): Supported 00:07:56.419 Write Zeroes (08h): Supported LBA-Change 00:07:56.419 Dataset Management (09h): Supported LBA-Change 00:07:56.419 Unknown (0Ch): Supported 00:07:56.419 Unknown (12h): Supported 00:07:56.419 Copy (19h): Supported LBA-Change 00:07:56.419 Unknown (1Dh): Supported LBA-Change 00:07:56.419 00:07:56.419 Error Log 00:07:56.419 ========= 00:07:56.419 00:07:56.419 Arbitration 00:07:56.419 =========== 00:07:56.419 Arbitration Burst: no limit 00:07:56.419 00:07:56.419 Power Management 00:07:56.419 ================ 00:07:56.419 Number of Power States: 1 00:07:56.419 Current Power State: Power State #0 00:07:56.419 Power State #0: 00:07:56.419 Max Power: 25.00 W 00:07:56.419 Non-Operational State: Operational 00:07:56.419 Entry Latency: 16 microseconds 00:07:56.419 Exit Latency: 4 microseconds 00:07:56.419 Relative Read Throughput: 0 00:07:56.419 Relative Read Latency: 0 00:07:56.419 Relative Write Throughput: 0 00:07:56.419 Relative Write Latency: 0 00:07:56.419 Idle Power: Not Reported 00:07:56.419 Active Power: Not Reported 00:07:56.419 Non-Operational Permissive Mode: Not Supported 00:07:56.419 00:07:56.419 Health Information 00:07:56.419 ================== 00:07:56.419 Critical Warnings: 00:07:56.419 Available Spare Space: OK 00:07:56.419 Temperature: OK 00:07:56.419 Device Reliability: OK 00:07:56.419 Read Only: No 00:07:56.419 Volatile Memory Backup: OK 00:07:56.419 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.419 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.419 Available Spare: 0% 00:07:56.419 Available Spare Threshold: 0% 00:07:56.419 Life Percentage Used: 0% 00:07:56.419 Data Units Read: 685 00:07:56.419 Data Units Written: 613 00:07:56.419 Host Read Commands: 36753 00:07:56.419 Host Write Commands: 36539 00:07:56.419 Controller Busy Time: 0 minutes 00:07:56.419 Power Cycles: 0 00:07:56.419 Power On Hours: 0 hours 00:07:56.419 Unsafe Shutdowns: 0 00:07:56.419 Unrecoverable Media Errors: 0 00:07:56.419 Lifetime Error Log Entries: 0 00:07:56.419 Warning Temperature Time: 0 minutes 00:07:56.419 Critical Temperature Time: 0 minutes 00:07:56.419 00:07:56.419 Number of Queues 00:07:56.419 ================ 00:07:56.419 Number of I/O Submission Queues: 64 00:07:56.419 Number of I/O Completion Queues: 64 00:07:56.419 00:07:56.419 ZNS Specific Controller Data 00:07:56.419 ============================ 00:07:56.419 Zone Append Size Limit: 0 00:07:56.419 00:07:56.419 00:07:56.419 Active Namespaces 00:07:56.419 ================= 00:07:56.419 Namespace ID:1 00:07:56.419 Error Recovery Timeout: Unlimited 00:07:56.419 Command Set Identifier: NVM (00h) 00:07:56.419 Deallocate: Supported 
00:07:56.419 Deallocated/Unwritten Error: Supported 00:07:56.419 Deallocated Read Value: All 0x00 00:07:56.419 Deallocate in Write Zeroes: Not Supported 00:07:56.419 Deallocated Guard Field: 0xFFFF 00:07:56.419 Flush: Supported 00:07:56.420 Reservation: Not Supported 00:07:56.420 Metadata Transferred as: Separate Metadata Buffer 00:07:56.420 Namespace Sharing Capabilities: Private 00:07:56.420 Size (in LBAs): 1548666 (5GiB) 00:07:56.420 Capacity (in LBAs): 1548666 (5GiB) 00:07:56.420 Utilization (in LBAs): 1548666 (5GiB) 00:07:56.420 Thin Provisioning: Not Supported 00:07:56.420 Per-NS Atomic Units: No 00:07:56.420 Maximum Single Source Range Length: 128 00:07:56.420 Maximum Copy Length: 128 00:07:56.420 Maximum Source Range Count: 128 00:07:56.420 NGUID/EUI64 Never Reused: No 00:07:56.420 Namespace Write Protected: No 00:07:56.420 Number of LBA Formats: 8 00:07:56.420 Current LBA Format: LBA Format #07 00:07:56.420 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.420 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.420 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.420 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.420 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.420 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.420 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.420 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.420 00:07:56.420 NVM Specific Namespace Data 00:07:56.420 =========================== 00:07:56.420 Logical Block Storage Tag Mask: 0 00:07:56.420 Protection Information Capabilities: 00:07:56.420 16b Guard Protection Information Storage Tag Support: No 00:07:56.420 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.420 Storage Tag Check Read Support: No 00:07:56.420 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.420 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.420 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.420 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.420 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.420 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.420 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.420 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.420 23:13:20 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:56.420 23:13:20 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:56.420 ===================================================== 00:07:56.420 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:56.420 ===================================================== 00:07:56.420 Controller Capabilities/Features 00:07:56.420 ================================ 00:07:56.420 Vendor ID: 1b36 00:07:56.420 Subsystem Vendor ID: 1af4 00:07:56.420 Serial Number: 12341 00:07:56.420 Model Number: QEMU NVMe Ctrl 00:07:56.420 Firmware Version: 8.0.0 00:07:56.420 Recommended Arb Burst: 6 00:07:56.420 IEEE OUI Identifier: 00 54 52 00:07:56.420 Multi-path I/O 00:07:56.420 May have multiple subsystem ports: No 00:07:56.420 May have multiple 
controllers: No 00:07:56.420 Associated with SR-IOV VF: No 00:07:56.420 Max Data Transfer Size: 524288 00:07:56.420 Max Number of Namespaces: 256 00:07:56.420 Max Number of I/O Queues: 64 00:07:56.420 NVMe Specification Version (VS): 1.4 00:07:56.420 NVMe Specification Version (Identify): 1.4 00:07:56.420 Maximum Queue Entries: 2048 00:07:56.420 Contiguous Queues Required: Yes 00:07:56.420 Arbitration Mechanisms Supported 00:07:56.420 Weighted Round Robin: Not Supported 00:07:56.420 Vendor Specific: Not Supported 00:07:56.420 Reset Timeout: 7500 ms 00:07:56.420 Doorbell Stride: 4 bytes 00:07:56.420 NVM Subsystem Reset: Not Supported 00:07:56.420 Command Sets Supported 00:07:56.420 NVM Command Set: Supported 00:07:56.420 Boot Partition: Not Supported 00:07:56.420 Memory Page Size Minimum: 4096 bytes 00:07:56.420 Memory Page Size Maximum: 65536 bytes 00:07:56.420 Persistent Memory Region: Not Supported 00:07:56.420 Optional Asynchronous Events Supported 00:07:56.420 Namespace Attribute Notices: Supported 00:07:56.420 Firmware Activation Notices: Not Supported 00:07:56.420 ANA Change Notices: Not Supported 00:07:56.420 PLE Aggregate Log Change Notices: Not Supported 00:07:56.420 LBA Status Info Alert Notices: Not Supported 00:07:56.420 EGE Aggregate Log Change Notices: Not Supported 00:07:56.420 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.420 Zone Descriptor Change Notices: Not Supported 00:07:56.420 Discovery Log Change Notices: Not Supported 00:07:56.420 Controller Attributes 00:07:56.420 128-bit Host Identifier: Not Supported 00:07:56.420 Non-Operational Permissive Mode: Not Supported 00:07:56.420 NVM Sets: Not Supported 00:07:56.420 Read Recovery Levels: Not Supported 00:07:56.420 Endurance Groups: Not Supported 00:07:56.420 Predictable Latency Mode: Not Supported 00:07:56.420 Traffic Based Keep ALive: Not Supported 00:07:56.420 Namespace Granularity: Not Supported 00:07:56.420 SQ Associations: Not Supported 00:07:56.420 UUID List: Not Supported 00:07:56.420 Multi-Domain Subsystem: Not Supported 00:07:56.420 Fixed Capacity Management: Not Supported 00:07:56.420 Variable Capacity Management: Not Supported 00:07:56.420 Delete Endurance Group: Not Supported 00:07:56.420 Delete NVM Set: Not Supported 00:07:56.420 Extended LBA Formats Supported: Supported 00:07:56.420 Flexible Data Placement Supported: Not Supported 00:07:56.420 00:07:56.420 Controller Memory Buffer Support 00:07:56.420 ================================ 00:07:56.420 Supported: No 00:07:56.420 00:07:56.420 Persistent Memory Region Support 00:07:56.420 ================================ 00:07:56.420 Supported: No 00:07:56.420 00:07:56.420 Admin Command Set Attributes 00:07:56.420 ============================ 00:07:56.420 Security Send/Receive: Not Supported 00:07:56.420 Format NVM: Supported 00:07:56.420 Firmware Activate/Download: Not Supported 00:07:56.420 Namespace Management: Supported 00:07:56.420 Device Self-Test: Not Supported 00:07:56.420 Directives: Supported 00:07:56.420 NVMe-MI: Not Supported 00:07:56.420 Virtualization Management: Not Supported 00:07:56.420 Doorbell Buffer Config: Supported 00:07:56.420 Get LBA Status Capability: Not Supported 00:07:56.420 Command & Feature Lockdown Capability: Not Supported 00:07:56.420 Abort Command Limit: 4 00:07:56.420 Async Event Request Limit: 4 00:07:56.420 Number of Firmware Slots: N/A 00:07:56.420 Firmware Slot 1 Read-Only: N/A 00:07:56.420 Firmware Activation Without Reset: N/A 00:07:56.420 Multiple Update Detection Support: N/A 00:07:56.420 Firmware Update 
Granularity: No Information Provided 00:07:56.420 Per-Namespace SMART Log: Yes 00:07:56.420 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.420 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:56.420 Command Effects Log Page: Supported 00:07:56.420 Get Log Page Extended Data: Supported 00:07:56.420 Telemetry Log Pages: Not Supported 00:07:56.421 Persistent Event Log Pages: Not Supported 00:07:56.421 Supported Log Pages Log Page: May Support 00:07:56.421 Commands Supported & Effects Log Page: Not Supported 00:07:56.421 Feature Identifiers & Effects Log Page:May Support 00:07:56.421 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.421 Data Area 4 for Telemetry Log: Not Supported 00:07:56.421 Error Log Page Entries Supported: 1 00:07:56.421 Keep Alive: Not Supported 00:07:56.421 00:07:56.421 NVM Command Set Attributes 00:07:56.421 ========================== 00:07:56.421 Submission Queue Entry Size 00:07:56.421 Max: 64 00:07:56.421 Min: 64 00:07:56.421 Completion Queue Entry Size 00:07:56.421 Max: 16 00:07:56.421 Min: 16 00:07:56.421 Number of Namespaces: 256 00:07:56.421 Compare Command: Supported 00:07:56.421 Write Uncorrectable Command: Not Supported 00:07:56.421 Dataset Management Command: Supported 00:07:56.421 Write Zeroes Command: Supported 00:07:56.421 Set Features Save Field: Supported 00:07:56.421 Reservations: Not Supported 00:07:56.421 Timestamp: Supported 00:07:56.421 Copy: Supported 00:07:56.421 Volatile Write Cache: Present 00:07:56.421 Atomic Write Unit (Normal): 1 00:07:56.421 Atomic Write Unit (PFail): 1 00:07:56.421 Atomic Compare & Write Unit: 1 00:07:56.421 Fused Compare & Write: Not Supported 00:07:56.421 Scatter-Gather List 00:07:56.421 SGL Command Set: Supported 00:07:56.421 SGL Keyed: Not Supported 00:07:56.421 SGL Bit Bucket Descriptor: Not Supported 00:07:56.421 SGL Metadata Pointer: Not Supported 00:07:56.421 Oversized SGL: Not Supported 00:07:56.421 SGL Metadata Address: Not Supported 00:07:56.421 SGL Offset: Not Supported 00:07:56.421 Transport SGL Data Block: Not Supported 00:07:56.421 Replay Protected Memory Block: Not Supported 00:07:56.421 00:07:56.421 Firmware Slot Information 00:07:56.421 ========================= 00:07:56.421 Active slot: 1 00:07:56.421 Slot 1 Firmware Revision: 1.0 00:07:56.421 00:07:56.421 00:07:56.421 Commands Supported and Effects 00:07:56.421 ============================== 00:07:56.421 Admin Commands 00:07:56.421 -------------- 00:07:56.421 Delete I/O Submission Queue (00h): Supported 00:07:56.421 Create I/O Submission Queue (01h): Supported 00:07:56.421 Get Log Page (02h): Supported 00:07:56.421 Delete I/O Completion Queue (04h): Supported 00:07:56.421 Create I/O Completion Queue (05h): Supported 00:07:56.421 Identify (06h): Supported 00:07:56.421 Abort (08h): Supported 00:07:56.421 Set Features (09h): Supported 00:07:56.421 Get Features (0Ah): Supported 00:07:56.421 Asynchronous Event Request (0Ch): Supported 00:07:56.421 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.421 Directive Send (19h): Supported 00:07:56.421 Directive Receive (1Ah): Supported 00:07:56.421 Virtualization Management (1Ch): Supported 00:07:56.421 Doorbell Buffer Config (7Ch): Supported 00:07:56.421 Format NVM (80h): Supported LBA-Change 00:07:56.421 I/O Commands 00:07:56.421 ------------ 00:07:56.421 Flush (00h): Supported LBA-Change 00:07:56.421 Write (01h): Supported LBA-Change 00:07:56.421 Read (02h): Supported 00:07:56.421 Compare (05h): Supported 00:07:56.421 Write Zeroes (08h): Supported LBA-Change 00:07:56.421 
Dataset Management (09h): Supported LBA-Change 00:07:56.421 Unknown (0Ch): Supported 00:07:56.421 Unknown (12h): Supported 00:07:56.421 Copy (19h): Supported LBA-Change 00:07:56.421 Unknown (1Dh): Supported LBA-Change 00:07:56.421 00:07:56.421 Error Log 00:07:56.421 ========= 00:07:56.421 00:07:56.421 Arbitration 00:07:56.421 =========== 00:07:56.421 Arbitration Burst: no limit 00:07:56.421 00:07:56.421 Power Management 00:07:56.421 ================ 00:07:56.421 Number of Power States: 1 00:07:56.421 Current Power State: Power State #0 00:07:56.421 Power State #0: 00:07:56.421 Max Power: 25.00 W 00:07:56.421 Non-Operational State: Operational 00:07:56.421 Entry Latency: 16 microseconds 00:07:56.421 Exit Latency: 4 microseconds 00:07:56.421 Relative Read Throughput: 0 00:07:56.421 Relative Read Latency: 0 00:07:56.421 Relative Write Throughput: 0 00:07:56.421 Relative Write Latency: 0 00:07:56.421 Idle Power: Not Reported 00:07:56.421 Active Power: Not Reported 00:07:56.421 Non-Operational Permissive Mode: Not Supported 00:07:56.421 00:07:56.421 Health Information 00:07:56.421 ================== 00:07:56.421 Critical Warnings: 00:07:56.421 Available Spare Space: OK 00:07:56.421 Temperature: OK 00:07:56.421 Device Reliability: OK 00:07:56.421 Read Only: No 00:07:56.421 Volatile Memory Backup: OK 00:07:56.421 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.421 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.421 Available Spare: 0% 00:07:56.421 Available Spare Threshold: 0% 00:07:56.421 Life Percentage Used: 0% 00:07:56.421 Data Units Read: 1045 00:07:56.421 Data Units Written: 912 00:07:56.421 Host Read Commands: 54622 00:07:56.421 Host Write Commands: 53397 00:07:56.421 Controller Busy Time: 0 minutes 00:07:56.421 Power Cycles: 0 00:07:56.421 Power On Hours: 0 hours 00:07:56.421 Unsafe Shutdowns: 0 00:07:56.421 Unrecoverable Media Errors: 0 00:07:56.421 Lifetime Error Log Entries: 0 00:07:56.421 Warning Temperature Time: 0 minutes 00:07:56.421 Critical Temperature Time: 0 minutes 00:07:56.421 00:07:56.421 Number of Queues 00:07:56.421 ================ 00:07:56.421 Number of I/O Submission Queues: 64 00:07:56.421 Number of I/O Completion Queues: 64 00:07:56.421 00:07:56.421 ZNS Specific Controller Data 00:07:56.421 ============================ 00:07:56.421 Zone Append Size Limit: 0 00:07:56.421 00:07:56.421 00:07:56.421 Active Namespaces 00:07:56.421 ================= 00:07:56.421 Namespace ID:1 00:07:56.421 Error Recovery Timeout: Unlimited 00:07:56.421 Command Set Identifier: NVM (00h) 00:07:56.421 Deallocate: Supported 00:07:56.421 Deallocated/Unwritten Error: Supported 00:07:56.421 Deallocated Read Value: All 0x00 00:07:56.421 Deallocate in Write Zeroes: Not Supported 00:07:56.421 Deallocated Guard Field: 0xFFFF 00:07:56.421 Flush: Supported 00:07:56.421 Reservation: Not Supported 00:07:56.421 Namespace Sharing Capabilities: Private 00:07:56.421 Size (in LBAs): 1310720 (5GiB) 00:07:56.421 Capacity (in LBAs): 1310720 (5GiB) 00:07:56.421 Utilization (in LBAs): 1310720 (5GiB) 00:07:56.421 Thin Provisioning: Not Supported 00:07:56.421 Per-NS Atomic Units: No 00:07:56.421 Maximum Single Source Range Length: 128 00:07:56.421 Maximum Copy Length: 128 00:07:56.421 Maximum Source Range Count: 128 00:07:56.421 NGUID/EUI64 Never Reused: No 00:07:56.421 Namespace Write Protected: No 00:07:56.421 Number of LBA Formats: 8 00:07:56.421 Current LBA Format: LBA Format #04 00:07:56.421 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.421 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:07:56.421 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.421 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.422 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.422 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.422 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.422 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.422 00:07:56.422 NVM Specific Namespace Data 00:07:56.422 =========================== 00:07:56.422 Logical Block Storage Tag Mask: 0 00:07:56.422 Protection Information Capabilities: 00:07:56.422 16b Guard Protection Information Storage Tag Support: No 00:07:56.422 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.422 Storage Tag Check Read Support: No 00:07:56.422 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.422 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.422 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.422 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.422 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.422 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.422 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.422 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.684 23:13:20 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:56.684 23:13:20 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:56.684 ===================================================== 00:07:56.684 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:56.684 ===================================================== 00:07:56.684 Controller Capabilities/Features 00:07:56.684 ================================ 00:07:56.684 Vendor ID: 1b36 00:07:56.684 Subsystem Vendor ID: 1af4 00:07:56.684 Serial Number: 12342 00:07:56.684 Model Number: QEMU NVMe Ctrl 00:07:56.684 Firmware Version: 8.0.0 00:07:56.684 Recommended Arb Burst: 6 00:07:56.684 IEEE OUI Identifier: 00 54 52 00:07:56.684 Multi-path I/O 00:07:56.684 May have multiple subsystem ports: No 00:07:56.684 May have multiple controllers: No 00:07:56.684 Associated with SR-IOV VF: No 00:07:56.684 Max Data Transfer Size: 524288 00:07:56.684 Max Number of Namespaces: 256 00:07:56.684 Max Number of I/O Queues: 64 00:07:56.684 NVMe Specification Version (VS): 1.4 00:07:56.684 NVMe Specification Version (Identify): 1.4 00:07:56.684 Maximum Queue Entries: 2048 00:07:56.684 Contiguous Queues Required: Yes 00:07:56.684 Arbitration Mechanisms Supported 00:07:56.684 Weighted Round Robin: Not Supported 00:07:56.684 Vendor Specific: Not Supported 00:07:56.684 Reset Timeout: 7500 ms 00:07:56.684 Doorbell Stride: 4 bytes 00:07:56.684 NVM Subsystem Reset: Not Supported 00:07:56.684 Command Sets Supported 00:07:56.684 NVM Command Set: Supported 00:07:56.684 Boot Partition: Not Supported 00:07:56.684 Memory Page Size Minimum: 4096 bytes 00:07:56.684 Memory Page Size Maximum: 65536 bytes 00:07:56.684 Persistent Memory Region: Not Supported 00:07:56.684 Optional Asynchronous Events Supported 00:07:56.684 Namespace Attribute Notices: Supported 00:07:56.684 Firmware 
Activation Notices: Not Supported 00:07:56.684 ANA Change Notices: Not Supported 00:07:56.684 PLE Aggregate Log Change Notices: Not Supported 00:07:56.684 LBA Status Info Alert Notices: Not Supported 00:07:56.684 EGE Aggregate Log Change Notices: Not Supported 00:07:56.684 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.684 Zone Descriptor Change Notices: Not Supported 00:07:56.684 Discovery Log Change Notices: Not Supported 00:07:56.684 Controller Attributes 00:07:56.684 128-bit Host Identifier: Not Supported 00:07:56.684 Non-Operational Permissive Mode: Not Supported 00:07:56.684 NVM Sets: Not Supported 00:07:56.684 Read Recovery Levels: Not Supported 00:07:56.684 Endurance Groups: Not Supported 00:07:56.684 Predictable Latency Mode: Not Supported 00:07:56.684 Traffic Based Keep ALive: Not Supported 00:07:56.684 Namespace Granularity: Not Supported 00:07:56.684 SQ Associations: Not Supported 00:07:56.684 UUID List: Not Supported 00:07:56.684 Multi-Domain Subsystem: Not Supported 00:07:56.684 Fixed Capacity Management: Not Supported 00:07:56.684 Variable Capacity Management: Not Supported 00:07:56.684 Delete Endurance Group: Not Supported 00:07:56.684 Delete NVM Set: Not Supported 00:07:56.684 Extended LBA Formats Supported: Supported 00:07:56.684 Flexible Data Placement Supported: Not Supported 00:07:56.684 00:07:56.684 Controller Memory Buffer Support 00:07:56.684 ================================ 00:07:56.684 Supported: No 00:07:56.684 00:07:56.684 Persistent Memory Region Support 00:07:56.684 ================================ 00:07:56.684 Supported: No 00:07:56.684 00:07:56.684 Admin Command Set Attributes 00:07:56.684 ============================ 00:07:56.684 Security Send/Receive: Not Supported 00:07:56.684 Format NVM: Supported 00:07:56.684 Firmware Activate/Download: Not Supported 00:07:56.684 Namespace Management: Supported 00:07:56.684 Device Self-Test: Not Supported 00:07:56.684 Directives: Supported 00:07:56.684 NVMe-MI: Not Supported 00:07:56.684 Virtualization Management: Not Supported 00:07:56.684 Doorbell Buffer Config: Supported 00:07:56.684 Get LBA Status Capability: Not Supported 00:07:56.684 Command & Feature Lockdown Capability: Not Supported 00:07:56.684 Abort Command Limit: 4 00:07:56.684 Async Event Request Limit: 4 00:07:56.684 Number of Firmware Slots: N/A 00:07:56.684 Firmware Slot 1 Read-Only: N/A 00:07:56.684 Firmware Activation Without Reset: N/A 00:07:56.684 Multiple Update Detection Support: N/A 00:07:56.684 Firmware Update Granularity: No Information Provided 00:07:56.684 Per-Namespace SMART Log: Yes 00:07:56.684 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.684 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:56.684 Command Effects Log Page: Supported 00:07:56.684 Get Log Page Extended Data: Supported 00:07:56.684 Telemetry Log Pages: Not Supported 00:07:56.684 Persistent Event Log Pages: Not Supported 00:07:56.684 Supported Log Pages Log Page: May Support 00:07:56.684 Commands Supported & Effects Log Page: Not Supported 00:07:56.684 Feature Identifiers & Effects Log Page:May Support 00:07:56.684 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.684 Data Area 4 for Telemetry Log: Not Supported 00:07:56.684 Error Log Page Entries Supported: 1 00:07:56.684 Keep Alive: Not Supported 00:07:56.684 00:07:56.684 NVM Command Set Attributes 00:07:56.684 ========================== 00:07:56.684 Submission Queue Entry Size 00:07:56.684 Max: 64 00:07:56.684 Min: 64 00:07:56.684 Completion Queue Entry Size 00:07:56.684 Max: 16 
00:07:56.684 Min: 16 00:07:56.684 Number of Namespaces: 256 00:07:56.684 Compare Command: Supported 00:07:56.684 Write Uncorrectable Command: Not Supported 00:07:56.684 Dataset Management Command: Supported 00:07:56.684 Write Zeroes Command: Supported 00:07:56.684 Set Features Save Field: Supported 00:07:56.684 Reservations: Not Supported 00:07:56.684 Timestamp: Supported 00:07:56.684 Copy: Supported 00:07:56.684 Volatile Write Cache: Present 00:07:56.684 Atomic Write Unit (Normal): 1 00:07:56.684 Atomic Write Unit (PFail): 1 00:07:56.685 Atomic Compare & Write Unit: 1 00:07:56.685 Fused Compare & Write: Not Supported 00:07:56.685 Scatter-Gather List 00:07:56.685 SGL Command Set: Supported 00:07:56.685 SGL Keyed: Not Supported 00:07:56.685 SGL Bit Bucket Descriptor: Not Supported 00:07:56.685 SGL Metadata Pointer: Not Supported 00:07:56.685 Oversized SGL: Not Supported 00:07:56.685 SGL Metadata Address: Not Supported 00:07:56.685 SGL Offset: Not Supported 00:07:56.685 Transport SGL Data Block: Not Supported 00:07:56.685 Replay Protected Memory Block: Not Supported 00:07:56.685 00:07:56.685 Firmware Slot Information 00:07:56.685 ========================= 00:07:56.685 Active slot: 1 00:07:56.685 Slot 1 Firmware Revision: 1.0 00:07:56.685 00:07:56.685 00:07:56.685 Commands Supported and Effects 00:07:56.685 ============================== 00:07:56.685 Admin Commands 00:07:56.685 -------------- 00:07:56.685 Delete I/O Submission Queue (00h): Supported 00:07:56.685 Create I/O Submission Queue (01h): Supported 00:07:56.685 Get Log Page (02h): Supported 00:07:56.685 Delete I/O Completion Queue (04h): Supported 00:07:56.685 Create I/O Completion Queue (05h): Supported 00:07:56.685 Identify (06h): Supported 00:07:56.685 Abort (08h): Supported 00:07:56.685 Set Features (09h): Supported 00:07:56.685 Get Features (0Ah): Supported 00:07:56.685 Asynchronous Event Request (0Ch): Supported 00:07:56.685 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.685 Directive Send (19h): Supported 00:07:56.685 Directive Receive (1Ah): Supported 00:07:56.685 Virtualization Management (1Ch): Supported 00:07:56.685 Doorbell Buffer Config (7Ch): Supported 00:07:56.685 Format NVM (80h): Supported LBA-Change 00:07:56.685 I/O Commands 00:07:56.685 ------------ 00:07:56.685 Flush (00h): Supported LBA-Change 00:07:56.685 Write (01h): Supported LBA-Change 00:07:56.685 Read (02h): Supported 00:07:56.685 Compare (05h): Supported 00:07:56.685 Write Zeroes (08h): Supported LBA-Change 00:07:56.685 Dataset Management (09h): Supported LBA-Change 00:07:56.685 Unknown (0Ch): Supported 00:07:56.685 Unknown (12h): Supported 00:07:56.685 Copy (19h): Supported LBA-Change 00:07:56.685 Unknown (1Dh): Supported LBA-Change 00:07:56.685 00:07:56.685 Error Log 00:07:56.685 ========= 00:07:56.685 00:07:56.685 Arbitration 00:07:56.685 =========== 00:07:56.685 Arbitration Burst: no limit 00:07:56.685 00:07:56.685 Power Management 00:07:56.685 ================ 00:07:56.685 Number of Power States: 1 00:07:56.685 Current Power State: Power State #0 00:07:56.685 Power State #0: 00:07:56.685 Max Power: 25.00 W 00:07:56.685 Non-Operational State: Operational 00:07:56.685 Entry Latency: 16 microseconds 00:07:56.685 Exit Latency: 4 microseconds 00:07:56.685 Relative Read Throughput: 0 00:07:56.685 Relative Read Latency: 0 00:07:56.685 Relative Write Throughput: 0 00:07:56.685 Relative Write Latency: 0 00:07:56.685 Idle Power: Not Reported 00:07:56.685 Active Power: Not Reported 00:07:56.685 Non-Operational Permissive Mode: Not Supported 
00:07:56.685 00:07:56.685 Health Information 00:07:56.685 ================== 00:07:56.685 Critical Warnings: 00:07:56.685 Available Spare Space: OK 00:07:56.685 Temperature: OK 00:07:56.685 Device Reliability: OK 00:07:56.685 Read Only: No 00:07:56.685 Volatile Memory Backup: OK 00:07:56.685 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.685 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.685 Available Spare: 0% 00:07:56.685 Available Spare Threshold: 0% 00:07:56.685 Life Percentage Used: 0% 00:07:56.685 Data Units Read: 2207 00:07:56.685 Data Units Written: 1994 00:07:56.685 Host Read Commands: 112401 00:07:56.685 Host Write Commands: 110670 00:07:56.685 Controller Busy Time: 0 minutes 00:07:56.685 Power Cycles: 0 00:07:56.685 Power On Hours: 0 hours 00:07:56.685 Unsafe Shutdowns: 0 00:07:56.685 Unrecoverable Media Errors: 0 00:07:56.685 Lifetime Error Log Entries: 0 00:07:56.685 Warning Temperature Time: 0 minutes 00:07:56.685 Critical Temperature Time: 0 minutes 00:07:56.685 00:07:56.685 Number of Queues 00:07:56.685 ================ 00:07:56.685 Number of I/O Submission Queues: 64 00:07:56.685 Number of I/O Completion Queues: 64 00:07:56.685 00:07:56.685 ZNS Specific Controller Data 00:07:56.685 ============================ 00:07:56.685 Zone Append Size Limit: 0 00:07:56.685 00:07:56.685 00:07:56.685 Active Namespaces 00:07:56.685 ================= 00:07:56.685 Namespace ID:1 00:07:56.685 Error Recovery Timeout: Unlimited 00:07:56.685 Command Set Identifier: NVM (00h) 00:07:56.685 Deallocate: Supported 00:07:56.685 Deallocated/Unwritten Error: Supported 00:07:56.685 Deallocated Read Value: All 0x00 00:07:56.685 Deallocate in Write Zeroes: Not Supported 00:07:56.685 Deallocated Guard Field: 0xFFFF 00:07:56.685 Flush: Supported 00:07:56.685 Reservation: Not Supported 00:07:56.685 Namespace Sharing Capabilities: Private 00:07:56.685 Size (in LBAs): 1048576 (4GiB) 00:07:56.685 Capacity (in LBAs): 1048576 (4GiB) 00:07:56.685 Utilization (in LBAs): 1048576 (4GiB) 00:07:56.685 Thin Provisioning: Not Supported 00:07:56.685 Per-NS Atomic Units: No 00:07:56.685 Maximum Single Source Range Length: 128 00:07:56.685 Maximum Copy Length: 128 00:07:56.685 Maximum Source Range Count: 128 00:07:56.685 NGUID/EUI64 Never Reused: No 00:07:56.685 Namespace Write Protected: No 00:07:56.685 Number of LBA Formats: 8 00:07:56.685 Current LBA Format: LBA Format #04 00:07:56.685 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.685 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.685 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.685 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.685 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.685 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.685 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.685 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.685 00:07:56.685 NVM Specific Namespace Data 00:07:56.685 =========================== 00:07:56.685 Logical Block Storage Tag Mask: 0 00:07:56.685 Protection Information Capabilities: 00:07:56.685 16b Guard Protection Information Storage Tag Support: No 00:07:56.685 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.685 Storage Tag Check Read Support: No 00:07:56.685 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.685 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.685 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.685 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.685 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.685 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.685 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.685 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.685 Namespace ID:2 00:07:56.685 Error Recovery Timeout: Unlimited 00:07:56.685 Command Set Identifier: NVM (00h) 00:07:56.685 Deallocate: Supported 00:07:56.685 Deallocated/Unwritten Error: Supported 00:07:56.685 Deallocated Read Value: All 0x00 00:07:56.685 Deallocate in Write Zeroes: Not Supported 00:07:56.685 Deallocated Guard Field: 0xFFFF 00:07:56.685 Flush: Supported 00:07:56.685 Reservation: Not Supported 00:07:56.685 Namespace Sharing Capabilities: Private 00:07:56.685 Size (in LBAs): 1048576 (4GiB) 00:07:56.685 Capacity (in LBAs): 1048576 (4GiB) 00:07:56.685 Utilization (in LBAs): 1048576 (4GiB) 00:07:56.685 Thin Provisioning: Not Supported 00:07:56.685 Per-NS Atomic Units: No 00:07:56.685 Maximum Single Source Range Length: 128 00:07:56.685 Maximum Copy Length: 128 00:07:56.685 Maximum Source Range Count: 128 00:07:56.685 NGUID/EUI64 Never Reused: No 00:07:56.685 Namespace Write Protected: No 00:07:56.685 Number of LBA Formats: 8 00:07:56.685 Current LBA Format: LBA Format #04 00:07:56.685 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.685 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.685 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.685 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.685 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.685 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.685 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.685 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.685 00:07:56.685 NVM Specific Namespace Data 00:07:56.685 =========================== 00:07:56.685 Logical Block Storage Tag Mask: 0 00:07:56.685 Protection Information Capabilities: 00:07:56.685 16b Guard Protection Information Storage Tag Support: No 00:07:56.685 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.686 Storage Tag Check Read Support: No 00:07:56.686 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.686 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.686 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.686 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.686 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.686 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.686 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.686 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.686 Namespace ID:3 00:07:56.686 Error Recovery Timeout: Unlimited 00:07:56.686 Command Set Identifier: NVM (00h) 00:07:56.686 Deallocate: Supported 00:07:56.686 Deallocated/Unwritten Error: Supported 00:07:56.686 Deallocated Read 
Value: All 0x00 00:07:56.686 Deallocate in Write Zeroes: Not Supported 00:07:56.686 Deallocated Guard Field: 0xFFFF 00:07:56.686 Flush: Supported 00:07:56.686 Reservation: Not Supported 00:07:56.686 Namespace Sharing Capabilities: Private 00:07:56.686 Size (in LBAs): 1048576 (4GiB) 00:07:56.686 Capacity (in LBAs): 1048576 (4GiB) 00:07:56.686 Utilization (in LBAs): 1048576 (4GiB) 00:07:56.686 Thin Provisioning: Not Supported 00:07:56.686 Per-NS Atomic Units: No 00:07:56.686 Maximum Single Source Range Length: 128 00:07:56.686 Maximum Copy Length: 128 00:07:56.686 Maximum Source Range Count: 128 00:07:56.686 NGUID/EUI64 Never Reused: No 00:07:56.686 Namespace Write Protected: No 00:07:56.686 Number of LBA Formats: 8 00:07:56.686 Current LBA Format: LBA Format #04 00:07:56.686 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.686 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.686 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.686 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.686 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.686 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.686 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.686 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.686 00:07:56.686 NVM Specific Namespace Data 00:07:56.686 =========================== 00:07:56.686 Logical Block Storage Tag Mask: 0 00:07:56.686 Protection Information Capabilities: 00:07:56.686 16b Guard Protection Information Storage Tag Support: No 00:07:56.686 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.686 Storage Tag Check Read Support: No 00:07:56.686 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.686 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.686 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.686 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.686 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.686 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.686 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.686 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.686 23:13:20 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:56.686 23:13:20 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:56.948 ===================================================== 00:07:56.948 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:56.948 ===================================================== 00:07:56.948 Controller Capabilities/Features 00:07:56.948 ================================ 00:07:56.948 Vendor ID: 1b36 00:07:56.948 Subsystem Vendor ID: 1af4 00:07:56.948 Serial Number: 12343 00:07:56.948 Model Number: QEMU NVMe Ctrl 00:07:56.948 Firmware Version: 8.0.0 00:07:56.948 Recommended Arb Burst: 6 00:07:56.948 IEEE OUI Identifier: 00 54 52 00:07:56.948 Multi-path I/O 00:07:56.948 May have multiple subsystem ports: No 00:07:56.948 May have multiple controllers: Yes 00:07:56.948 Associated with SR-IOV VF: No 00:07:56.948 Max Data Transfer Size: 524288 00:07:56.948 Max Number of Namespaces: 
256 00:07:56.948 Max Number of I/O Queues: 64 00:07:56.948 NVMe Specification Version (VS): 1.4 00:07:56.948 NVMe Specification Version (Identify): 1.4 00:07:56.948 Maximum Queue Entries: 2048 00:07:56.948 Contiguous Queues Required: Yes 00:07:56.948 Arbitration Mechanisms Supported 00:07:56.948 Weighted Round Robin: Not Supported 00:07:56.948 Vendor Specific: Not Supported 00:07:56.948 Reset Timeout: 7500 ms 00:07:56.948 Doorbell Stride: 4 bytes 00:07:56.948 NVM Subsystem Reset: Not Supported 00:07:56.948 Command Sets Supported 00:07:56.948 NVM Command Set: Supported 00:07:56.948 Boot Partition: Not Supported 00:07:56.948 Memory Page Size Minimum: 4096 bytes 00:07:56.948 Memory Page Size Maximum: 65536 bytes 00:07:56.948 Persistent Memory Region: Not Supported 00:07:56.948 Optional Asynchronous Events Supported 00:07:56.948 Namespace Attribute Notices: Supported 00:07:56.948 Firmware Activation Notices: Not Supported 00:07:56.948 ANA Change Notices: Not Supported 00:07:56.948 PLE Aggregate Log Change Notices: Not Supported 00:07:56.948 LBA Status Info Alert Notices: Not Supported 00:07:56.948 EGE Aggregate Log Change Notices: Not Supported 00:07:56.948 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.948 Zone Descriptor Change Notices: Not Supported 00:07:56.948 Discovery Log Change Notices: Not Supported 00:07:56.948 Controller Attributes 00:07:56.948 128-bit Host Identifier: Not Supported 00:07:56.948 Non-Operational Permissive Mode: Not Supported 00:07:56.948 NVM Sets: Not Supported 00:07:56.948 Read Recovery Levels: Not Supported 00:07:56.948 Endurance Groups: Supported 00:07:56.948 Predictable Latency Mode: Not Supported 00:07:56.948 Traffic Based Keep Alive: Not Supported 00:07:56.948 Namespace Granularity: Not Supported 00:07:56.948 SQ Associations: Not Supported 00:07:56.948 UUID List: Not Supported 00:07:56.948 Multi-Domain Subsystem: Not Supported 00:07:56.948 Fixed Capacity Management: Not Supported 00:07:56.948 Variable Capacity Management: Not Supported 00:07:56.948 Delete Endurance Group: Not Supported 00:07:56.948 Delete NVM Set: Not Supported 00:07:56.948 Extended LBA Formats Supported: Supported 00:07:56.948 Flexible Data Placement Supported: Supported 00:07:56.948 00:07:56.948 Controller Memory Buffer Support 00:07:56.948 ================================ 00:07:56.948 Supported: No 00:07:56.948 00:07:56.948 Persistent Memory Region Support 00:07:56.948 ================================ 00:07:56.948 Supported: No 00:07:56.948 00:07:56.948 Admin Command Set Attributes 00:07:56.948 ============================ 00:07:56.948 Security Send/Receive: Not Supported 00:07:56.948 Format NVM: Supported 00:07:56.948 Firmware Activate/Download: Not Supported 00:07:56.948 Namespace Management: Supported 00:07:56.948 Device Self-Test: Not Supported 00:07:56.948 Directives: Supported 00:07:56.948 NVMe-MI: Not Supported 00:07:56.948 Virtualization Management: Not Supported 00:07:56.948 Doorbell Buffer Config: Supported 00:07:56.948 Get LBA Status Capability: Not Supported 00:07:56.948 Command & Feature Lockdown Capability: Not Supported 00:07:56.948 Abort Command Limit: 4 00:07:56.948 Async Event Request Limit: 4 00:07:56.948 Number of Firmware Slots: N/A 00:07:56.948 Firmware Slot 1 Read-Only: N/A 00:07:56.948 Firmware Activation Without Reset: N/A 00:07:56.948 Multiple Update Detection Support: N/A 00:07:56.948 Firmware Update Granularity: No Information Provided 00:07:56.948 Per-Namespace SMART Log: Yes 00:07:56.948 Asymmetric Namespace Access Log Page: Not Supported
00:07:56.948 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:56.948 Command Effects Log Page: Supported 00:07:56.948 Get Log Page Extended Data: Supported 00:07:56.948 Telemetry Log Pages: Not Supported 00:07:56.948 Persistent Event Log Pages: Not Supported 00:07:56.948 Supported Log Pages Log Page: May Support 00:07:56.948 Commands Supported & Effects Log Page: Not Supported 00:07:56.948 Feature Identifiers & Effects Log Page: May Support 00:07:56.948 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.948 Data Area 4 for Telemetry Log: Not Supported 00:07:56.948 Error Log Page Entries Supported: 1 00:07:56.948 Keep Alive: Not Supported 00:07:56.948 00:07:56.948 NVM Command Set Attributes 00:07:56.948 ========================== 00:07:56.948 Submission Queue Entry Size 00:07:56.948 Max: 64 00:07:56.948 Min: 64 00:07:56.948 Completion Queue Entry Size 00:07:56.948 Max: 16 00:07:56.948 Min: 16 00:07:56.948 Number of Namespaces: 256 00:07:56.948 Compare Command: Supported 00:07:56.948 Write Uncorrectable Command: Not Supported 00:07:56.948 Dataset Management Command: Supported 00:07:56.948 Write Zeroes Command: Supported 00:07:56.948 Set Features Save Field: Supported 00:07:56.948 Reservations: Not Supported 00:07:56.948 Timestamp: Supported 00:07:56.948 Copy: Supported 00:07:56.948 Volatile Write Cache: Present 00:07:56.948 Atomic Write Unit (Normal): 1 00:07:56.948 Atomic Write Unit (PFail): 1 00:07:56.948 Atomic Compare & Write Unit: 1 00:07:56.948 Fused Compare & Write: Not Supported 00:07:56.948 Scatter-Gather List 00:07:56.948 SGL Command Set: Supported 00:07:56.949 SGL Keyed: Not Supported 00:07:56.949 SGL Bit Bucket Descriptor: Not Supported 00:07:56.949 SGL Metadata Pointer: Not Supported 00:07:56.949 Oversized SGL: Not Supported 00:07:56.949 SGL Metadata Address: Not Supported 00:07:56.949 SGL Offset: Not Supported 00:07:56.949 Transport SGL Data Block: Not Supported 00:07:56.949 Replay Protected Memory Block: Not Supported 00:07:56.949 00:07:56.949 Firmware Slot Information 00:07:56.949 ========================= 00:07:56.949 Active slot: 1 00:07:56.949 Slot 1 Firmware Revision: 1.0 00:07:56.949 00:07:56.949 00:07:56.949 Commands Supported and Effects 00:07:56.949 ============================== 00:07:56.949 Admin Commands 00:07:56.949 -------------- 00:07:56.949 Delete I/O Submission Queue (00h): Supported 00:07:56.949 Create I/O Submission Queue (01h): Supported 00:07:56.949 Get Log Page (02h): Supported 00:07:56.949 Delete I/O Completion Queue (04h): Supported 00:07:56.949 Create I/O Completion Queue (05h): Supported 00:07:56.949 Identify (06h): Supported 00:07:56.949 Abort (08h): Supported 00:07:56.949 Set Features (09h): Supported 00:07:56.949 Get Features (0Ah): Supported 00:07:56.949 Asynchronous Event Request (0Ch): Supported 00:07:56.949 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.949 Directive Send (19h): Supported 00:07:56.949 Directive Receive (1Ah): Supported 00:07:56.949 Virtualization Management (1Ch): Supported 00:07:56.949 Doorbell Buffer Config (7Ch): Supported 00:07:56.949 Format NVM (80h): Supported LBA-Change 00:07:56.949 I/O Commands 00:07:56.949 ------------ 00:07:56.949 Flush (00h): Supported LBA-Change 00:07:56.949 Write (01h): Supported LBA-Change 00:07:56.949 Read (02h): Supported 00:07:56.949 Compare (05h): Supported 00:07:56.949 Write Zeroes (08h): Supported LBA-Change 00:07:56.949 Dataset Management (09h): Supported LBA-Change 00:07:56.949 Unknown (0Ch): Supported 00:07:56.949 Unknown (12h): Supported 00:07:56.949 Copy
(19h): Supported LBA-Change 00:07:56.949 Unknown (1Dh): Supported LBA-Change 00:07:56.949 00:07:56.949 Error Log 00:07:56.949 ========= 00:07:56.949 00:07:56.949 Arbitration 00:07:56.949 =========== 00:07:56.949 Arbitration Burst: no limit 00:07:56.949 00:07:56.949 Power Management 00:07:56.949 ================ 00:07:56.949 Number of Power States: 1 00:07:56.949 Current Power State: Power State #0 00:07:56.949 Power State #0: 00:07:56.949 Max Power: 25.00 W 00:07:56.949 Non-Operational State: Operational 00:07:56.949 Entry Latency: 16 microseconds 00:07:56.949 Exit Latency: 4 microseconds 00:07:56.949 Relative Read Throughput: 0 00:07:56.949 Relative Read Latency: 0 00:07:56.949 Relative Write Throughput: 0 00:07:56.949 Relative Write Latency: 0 00:07:56.949 Idle Power: Not Reported 00:07:56.949 Active Power: Not Reported 00:07:56.949 Non-Operational Permissive Mode: Not Supported 00:07:56.949 00:07:56.949 Health Information 00:07:56.949 ================== 00:07:56.949 Critical Warnings: 00:07:56.949 Available Spare Space: OK 00:07:56.949 Temperature: OK 00:07:56.949 Device Reliability: OK 00:07:56.949 Read Only: No 00:07:56.949 Volatile Memory Backup: OK 00:07:56.949 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.949 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.949 Available Spare: 0% 00:07:56.949 Available Spare Threshold: 0% 00:07:56.949 Life Percentage Used: 0% 00:07:56.949 Data Units Read: 916 00:07:56.949 Data Units Written: 845 00:07:56.949 Host Read Commands: 39033 00:07:56.949 Host Write Commands: 38456 00:07:56.949 Controller Busy Time: 0 minutes 00:07:56.949 Power Cycles: 0 00:07:56.949 Power On Hours: 0 hours 00:07:56.949 Unsafe Shutdowns: 0 00:07:56.949 Unrecoverable Media Errors: 0 00:07:56.949 Lifetime Error Log Entries: 0 00:07:56.949 Warning Temperature Time: 0 minutes 00:07:56.949 Critical Temperature Time: 0 minutes 00:07:56.949 00:07:56.949 Number of Queues 00:07:56.949 ================ 00:07:56.949 Number of I/O Submission Queues: 64 00:07:56.949 Number of I/O Completion Queues: 64 00:07:56.949 00:07:56.949 ZNS Specific Controller Data 00:07:56.949 ============================ 00:07:56.949 Zone Append Size Limit: 0 00:07:56.949 00:07:56.949 00:07:56.949 Active Namespaces 00:07:56.949 ================= 00:07:56.949 Namespace ID:1 00:07:56.949 Error Recovery Timeout: Unlimited 00:07:56.949 Command Set Identifier: NVM (00h) 00:07:56.949 Deallocate: Supported 00:07:56.949 Deallocated/Unwritten Error: Supported 00:07:56.949 Deallocated Read Value: All 0x00 00:07:56.949 Deallocate in Write Zeroes: Not Supported 00:07:56.949 Deallocated Guard Field: 0xFFFF 00:07:56.949 Flush: Supported 00:07:56.949 Reservation: Not Supported 00:07:56.949 Namespace Sharing Capabilities: Multiple Controllers 00:07:56.949 Size (in LBAs): 262144 (1GiB) 00:07:56.949 Capacity (in LBAs): 262144 (1GiB) 00:07:56.949 Utilization (in LBAs): 262144 (1GiB) 00:07:56.949 Thin Provisioning: Not Supported 00:07:56.949 Per-NS Atomic Units: No 00:07:56.949 Maximum Single Source Range Length: 128 00:07:56.949 Maximum Copy Length: 128 00:07:56.949 Maximum Source Range Count: 128 00:07:56.949 NGUID/EUI64 Never Reused: No 00:07:56.949 Namespace Write Protected: No 00:07:56.949 Endurance group ID: 1 00:07:56.949 Number of LBA Formats: 8 00:07:56.949 Current LBA Format: LBA Format #04 00:07:56.949 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.949 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.949 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.949 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:07:56.949 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.949 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.949 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.949 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.949 00:07:56.949 Get Feature FDP: 00:07:56.949 ================ 00:07:56.949 Enabled: Yes 00:07:56.949 FDP configuration index: 0 00:07:56.949 00:07:56.949 FDP configurations log page 00:07:56.949 =========================== 00:07:56.949 Number of FDP configurations: 1 00:07:56.949 Version: 0 00:07:56.949 Size: 112 00:07:56.949 FDP Configuration Descriptor: 0 00:07:56.949 Descriptor Size: 96 00:07:56.949 Reclaim Group Identifier format: 2 00:07:56.949 FDP Volatile Write Cache: Not Present 00:07:56.949 FDP Configuration: Valid 00:07:56.949 Vendor Specific Size: 0 00:07:56.949 Number of Reclaim Groups: 2 00:07:56.949 Number of Reclaim Unit Handles: 8 00:07:56.949 Max Placement Identifiers: 128 00:07:56.949 Number of Namespaces Supported: 256 00:07:56.949 Reclaim Unit Nominal Size: 6000000 bytes 00:07:56.949 Estimated Reclaim Unit Time Limit: Not Reported 00:07:56.949 RUH Desc #000: RUH Type: Initially Isolated 00:07:56.949 RUH Desc #001: RUH Type: Initially Isolated 00:07:56.949 RUH Desc #002: RUH Type: Initially Isolated 00:07:56.949 RUH Desc #003: RUH Type: Initially Isolated 00:07:56.949 RUH Desc #004: RUH Type: Initially Isolated 00:07:56.949 RUH Desc #005: RUH Type: Initially Isolated 00:07:56.949 RUH Desc #006: RUH Type: Initially Isolated 00:07:56.949 RUH Desc #007: RUH Type: Initially Isolated 00:07:56.949 00:07:56.949 FDP reclaim unit handle usage log page 00:07:56.949 ====================================== 00:07:56.949 Number of Reclaim Unit Handles: 8 00:07:56.949 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:56.949 RUH Usage Desc #001: RUH Attributes: Unused 00:07:56.949 RUH Usage Desc #002: RUH Attributes: Unused 00:07:56.949 RUH Usage Desc #003: RUH Attributes: Unused 00:07:56.949 RUH Usage Desc #004: RUH Attributes: Unused 00:07:56.949 RUH Usage Desc #005: RUH Attributes: Unused 00:07:56.949 RUH Usage Desc #006: RUH Attributes: Unused 00:07:56.949 RUH Usage Desc #007: RUH Attributes: Unused 00:07:56.949 00:07:56.949 FDP statistics log page 00:07:56.949 ======================= 00:07:56.949 Host bytes with metadata written: 509452288 00:07:56.949 Media bytes with metadata written: 509509632 00:07:56.949 Media bytes erased: 0 00:07:56.949 00:07:56.949 FDP events log page 00:07:56.949 =================== 00:07:56.949 Number of FDP events: 0 00:07:56.949 00:07:56.949 NVM Specific Namespace Data 00:07:56.949 =========================== 00:07:56.949 Logical Block Storage Tag Mask: 0 00:07:56.949 Protection Information Capabilities: 00:07:56.949 16b Guard Protection Information Storage Tag Support: No 00:07:56.949 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.949 Storage Tag Check Read Support: No 00:07:56.949 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.949 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.950 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.950 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.950 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.950 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.950 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.950 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.950 00:07:56.950 real 0m1.078s 00:07:56.950 user 0m0.398s 00:07:56.950 sys 0m0.470s 00:07:56.950 23:13:20 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.950 23:13:20 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:56.950 ************************************ 00:07:56.950 END TEST nvme_identify 00:07:56.950 ************************************ 00:07:56.950 23:13:20 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:56.950 23:13:20 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.950 23:13:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.950 23:13:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.950 ************************************ 00:07:56.950 START TEST nvme_perf 00:07:56.950 ************************************ 00:07:56.950 23:13:20 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:56.950 23:13:20 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:58.337 Initializing NVMe Controllers 00:07:58.337 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:58.337 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:58.337 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:58.337 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:58.337 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:58.337 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:58.337 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:58.338 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:58.338 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:58.338 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:58.338 Initialization complete. Launching workers. 
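[Note on reproducing these runs] The identify pass above invoked /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify once per controller address, and the perf run just launched uses -q 128 (queue depth), -w read (I/O pattern), -o 12288 (I/O size in bytes), -t 1 (run time in seconds) and -LL (latency tracking; a single -L prints only the summary, -LL adds the detailed histograms shown below). A minimal sketch of repeating both runs by hand against the same QEMU-emulated controllers, assuming an already-built SPDK tree and devices bound to a userspace driver via scripts/setup.sh (environment steps not shown in this log):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk      # checkout path taken from the log
  sudo "$SPDK_DIR/scripts/setup.sh"          # bind the NVMe controllers for SPDK
  # Identify a single controller, as nvme.sh does for each bdf:
  sudo "$SPDK_DIR/build/bin/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:13.0' -i 0
  # Read-latency run with the same parameters (-N passed through exactly as the harness did):
  sudo "$SPDK_DIR/build/bin/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N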
00:07:58.338 ======================================================== 00:07:58.338 Latency(us) 00:07:58.338 Device Information : IOPS MiB/s Average min max 00:07:58.338 PCIE (0000:00:13.0) NSID 1 from core 0: 7297.53 85.52 17557.12 13853.85 40124.03 00:07:58.338 PCIE (0000:00:10.0) NSID 1 from core 0: 7297.53 85.52 17540.15 13131.34 39413.95 00:07:58.338 PCIE (0000:00:11.0) NSID 1 from core 0: 7297.53 85.52 17517.42 11810.40 38363.21 00:07:58.338 PCIE (0000:00:12.0) NSID 1 from core 0: 7297.53 85.52 17492.61 8711.09 38776.98 00:07:58.338 PCIE (0000:00:12.0) NSID 2 from core 0: 7297.53 85.52 17467.77 7434.88 38067.94 00:07:58.338 PCIE (0000:00:12.0) NSID 3 from core 0: 7360.99 86.26 17292.93 6292.91 30015.32 00:07:58.338 ======================================================== 00:07:58.338 Total : 43848.66 513.85 17477.73 6292.91 40124.03 00:07:58.338 00:07:58.338 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:58.338 ================================================================================= 00:07:58.338 1.00000% : 14518.745us 00:07:58.338 10.00000% : 15426.166us 00:07:58.338 25.00000% : 16131.938us 00:07:58.338 50.00000% : 17140.185us 00:07:58.338 75.00000% : 18450.905us 00:07:58.338 90.00000% : 19559.975us 00:07:58.338 95.00000% : 20769.871us 00:07:58.338 98.00000% : 22483.889us 00:07:58.338 99.00000% : 31255.631us 00:07:58.338 99.50000% : 39321.600us 00:07:58.338 99.90000% : 40128.197us 00:07:58.338 99.99000% : 40128.197us 00:07:58.338 99.99900% : 40128.197us 00:07:58.338 99.99990% : 40128.197us 00:07:58.338 99.99999% : 40128.197us 00:07:58.338 00:07:58.338 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:58.338 ================================================================================= 00:07:58.338 1.00000% : 14115.446us 00:07:58.338 10.00000% : 15325.342us 00:07:58.338 25.00000% : 16031.114us 00:07:58.338 50.00000% : 17140.185us 00:07:58.338 75.00000% : 18450.905us 00:07:58.338 90.00000% : 19660.800us 00:07:58.338 95.00000% : 20870.695us 00:07:58.338 98.00000% : 22786.363us 00:07:58.338 99.00000% : 30650.683us 00:07:58.338 99.50000% : 38716.652us 00:07:58.338 99.90000% : 39321.600us 00:07:58.338 99.99000% : 39523.249us 00:07:58.338 99.99900% : 39523.249us 00:07:58.338 99.99990% : 39523.249us 00:07:58.338 99.99999% : 39523.249us 00:07:58.338 00:07:58.338 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:58.338 ================================================================================= 00:07:58.338 1.00000% : 14216.271us 00:07:58.338 10.00000% : 15426.166us 00:07:58.338 25.00000% : 16131.938us 00:07:58.338 50.00000% : 17039.360us 00:07:58.338 75.00000% : 18450.905us 00:07:58.338 90.00000% : 19660.800us 00:07:58.338 95.00000% : 20568.222us 00:07:58.338 98.00000% : 22988.012us 00:07:58.338 99.00000% : 29642.437us 00:07:58.338 99.50000% : 37708.406us 00:07:58.338 99.90000% : 38313.354us 00:07:58.338 99.99000% : 38515.003us 00:07:58.338 99.99900% : 38515.003us 00:07:58.338 99.99990% : 38515.003us 00:07:58.338 99.99999% : 38515.003us 00:07:58.338 00:07:58.338 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:58.338 ================================================================================= 00:07:58.338 1.00000% : 13913.797us 00:07:58.338 10.00000% : 15426.166us 00:07:58.338 25.00000% : 16232.763us 00:07:58.338 50.00000% : 17241.009us 00:07:58.338 75.00000% : 18249.255us 00:07:58.338 90.00000% : 19660.800us 00:07:58.338 95.00000% : 20870.695us 00:07:58.338 98.00000% : 22584.714us 
00:07:58.338 99.00000% : 30045.735us 00:07:58.338 99.50000% : 38111.705us 00:07:58.338 99.90000% : 38716.652us 00:07:58.338 99.99000% : 38918.302us 00:07:58.338 99.99900% : 38918.302us 00:07:58.338 99.99990% : 38918.302us 00:07:58.338 99.99999% : 38918.302us 00:07:58.338 00:07:58.338 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:58.338 ================================================================================= 00:07:58.338 1.00000% : 13712.148us 00:07:58.338 10.00000% : 15426.166us 00:07:58.338 25.00000% : 16232.763us 00:07:58.338 50.00000% : 17140.185us 00:07:58.338 75.00000% : 18249.255us 00:07:58.338 90.00000% : 19761.625us 00:07:58.338 95.00000% : 20870.695us 00:07:58.338 98.00000% : 22887.188us 00:07:58.338 99.00000% : 29844.086us 00:07:58.338 99.50000% : 37305.108us 00:07:58.338 99.90000% : 37910.055us 00:07:58.338 99.99000% : 38111.705us 00:07:58.338 99.99900% : 38111.705us 00:07:58.338 99.99990% : 38111.705us 00:07:58.338 99.99999% : 38111.705us 00:07:58.338 00:07:58.338 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:58.338 ================================================================================= 00:07:58.338 1.00000% : 14014.622us 00:07:58.338 10.00000% : 15426.166us 00:07:58.338 25.00000% : 16131.938us 00:07:58.338 50.00000% : 17039.360us 00:07:58.338 75.00000% : 18350.080us 00:07:58.338 90.00000% : 19559.975us 00:07:58.338 95.00000% : 20669.046us 00:07:58.338 98.00000% : 22080.591us 00:07:58.338 99.00000% : 22786.363us 00:07:58.338 99.50000% : 29239.138us 00:07:58.338 99.90000% : 30045.735us 00:07:58.338 99.99000% : 30045.735us 00:07:58.338 99.99900% : 30045.735us 00:07:58.338 99.99990% : 30045.735us 00:07:58.338 99.99999% : 30045.735us 00:07:58.338 00:07:58.338 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:58.338 ============================================================================== 00:07:58.338 Range in us Cumulative IO count 00:07:58.338 13812.972 - 13913.797: 0.0543% ( 4) 00:07:58.338 13913.797 - 14014.622: 0.2446% ( 14) 00:07:58.338 14014.622 - 14115.446: 0.3940% ( 11) 00:07:58.338 14115.446 - 14216.271: 0.5027% ( 8) 00:07:58.338 14216.271 - 14317.095: 0.6658% ( 12) 00:07:58.338 14317.095 - 14417.920: 0.8696% ( 15) 00:07:58.338 14417.920 - 14518.745: 1.1005% ( 17) 00:07:58.338 14518.745 - 14619.569: 1.4810% ( 28) 00:07:58.338 14619.569 - 14720.394: 2.1603% ( 50) 00:07:58.338 14720.394 - 14821.218: 3.2065% ( 77) 00:07:58.338 14821.218 - 14922.043: 4.2391% ( 76) 00:07:58.338 14922.043 - 15022.868: 5.3533% ( 82) 00:07:58.338 15022.868 - 15123.692: 6.6576% ( 96) 00:07:58.338 15123.692 - 15224.517: 7.9212% ( 93) 00:07:58.338 15224.517 - 15325.342: 9.4158% ( 110) 00:07:58.338 15325.342 - 15426.166: 10.9918% ( 116) 00:07:58.338 15426.166 - 15526.991: 12.6902% ( 125) 00:07:58.338 15526.991 - 15627.815: 14.5109% ( 134) 00:07:58.338 15627.815 - 15728.640: 16.6984% ( 161) 00:07:58.338 15728.640 - 15829.465: 19.4158% ( 200) 00:07:58.338 15829.465 - 15930.289: 21.8478% ( 179) 00:07:58.338 15930.289 - 16031.114: 24.2120% ( 174) 00:07:58.338 16031.114 - 16131.938: 26.5761% ( 174) 00:07:58.338 16131.938 - 16232.763: 29.1848% ( 192) 00:07:58.338 16232.763 - 16333.588: 31.6304% ( 180) 00:07:58.338 16333.588 - 16434.412: 34.2935% ( 196) 00:07:58.338 16434.412 - 16535.237: 36.7935% ( 184) 00:07:58.338 16535.237 - 16636.062: 39.4565% ( 196) 00:07:58.338 16636.062 - 16736.886: 42.1332% ( 197) 00:07:58.338 16736.886 - 16837.711: 44.8777% ( 202) 00:07:58.338 16837.711 - 16938.535: 47.4592% ( 190) 
00:07:58.338 16938.535 - 17039.360: 49.8234% ( 174) 00:07:58.338 17039.360 - 17140.185: 51.9293% ( 155) 00:07:58.338 17140.185 - 17241.009: 53.9810% ( 151) 00:07:58.338 17241.009 - 17341.834: 56.0054% ( 149) 00:07:58.338 17341.834 - 17442.658: 57.9891% ( 146) 00:07:58.338 17442.658 - 17543.483: 59.9457% ( 144) 00:07:58.338 17543.483 - 17644.308: 61.7935% ( 136) 00:07:58.338 17644.308 - 17745.132: 63.6549% ( 137) 00:07:58.338 17745.132 - 17845.957: 65.6522% ( 147) 00:07:58.338 17845.957 - 17946.782: 67.4728% ( 134) 00:07:58.338 17946.782 - 18047.606: 69.1304% ( 122) 00:07:58.338 18047.606 - 18148.431: 70.8560% ( 127) 00:07:58.338 18148.431 - 18249.255: 72.7310% ( 138) 00:07:58.338 18249.255 - 18350.080: 74.3478% ( 119) 00:07:58.338 18350.080 - 18450.905: 76.0870% ( 128) 00:07:58.338 18450.905 - 18551.729: 77.9755% ( 139) 00:07:58.338 18551.729 - 18652.554: 79.6196% ( 121) 00:07:58.338 18652.554 - 18753.378: 81.0734% ( 107) 00:07:58.338 18753.378 - 18854.203: 82.5951% ( 112) 00:07:58.338 18854.203 - 18955.028: 84.0082% ( 104) 00:07:58.338 18955.028 - 19055.852: 85.2717% ( 93) 00:07:58.338 19055.852 - 19156.677: 86.4402% ( 86) 00:07:58.338 19156.677 - 19257.502: 87.6087% ( 86) 00:07:58.338 19257.502 - 19358.326: 88.7092% ( 81) 00:07:58.338 19358.326 - 19459.151: 89.6603% ( 70) 00:07:58.338 19459.151 - 19559.975: 90.4891% ( 61) 00:07:58.338 19559.975 - 19660.800: 91.2500% ( 56) 00:07:58.338 19660.800 - 19761.625: 91.8342% ( 43) 00:07:58.338 19761.625 - 19862.449: 92.3505% ( 38) 00:07:58.338 19862.449 - 19963.274: 92.7174% ( 27) 00:07:58.338 19963.274 - 20064.098: 93.1114% ( 29) 00:07:58.338 20064.098 - 20164.923: 93.5054% ( 29) 00:07:58.338 20164.923 - 20265.748: 93.8723% ( 27) 00:07:58.338 20265.748 - 20366.572: 94.2527% ( 28) 00:07:58.338 20366.572 - 20467.397: 94.5924% ( 25) 00:07:58.338 20467.397 - 20568.222: 94.7283% ( 10) 00:07:58.338 20568.222 - 20669.046: 94.8370% ( 8) 00:07:58.338 20669.046 - 20769.871: 95.0272% ( 14) 00:07:58.338 20769.871 - 20870.695: 95.2853% ( 19) 00:07:58.338 20870.695 - 20971.520: 95.5707% ( 21) 00:07:58.338 20971.520 - 21072.345: 95.8696% ( 22) 00:07:58.339 21072.345 - 21173.169: 96.1413% ( 20) 00:07:58.339 21173.169 - 21273.994: 96.3043% ( 12) 00:07:58.339 21273.994 - 21374.818: 96.4402% ( 10) 00:07:58.339 21374.818 - 21475.643: 96.5761% ( 10) 00:07:58.339 21475.643 - 21576.468: 96.6984% ( 9) 00:07:58.339 21576.468 - 21677.292: 96.9565% ( 19) 00:07:58.339 21677.292 - 21778.117: 97.1467% ( 14) 00:07:58.339 21778.117 - 21878.942: 97.3370% ( 14) 00:07:58.339 21878.942 - 21979.766: 97.5408% ( 15) 00:07:58.339 21979.766 - 22080.591: 97.7038% ( 12) 00:07:58.339 22080.591 - 22181.415: 97.8261% ( 9) 00:07:58.339 22181.415 - 22282.240: 97.8940% ( 5) 00:07:58.339 22282.240 - 22383.065: 97.9620% ( 5) 00:07:58.339 22383.065 - 22483.889: 98.0299% ( 5) 00:07:58.339 22483.889 - 22584.714: 98.0978% ( 5) 00:07:58.339 22584.714 - 22685.538: 98.1658% ( 5) 00:07:58.339 22685.538 - 22786.363: 98.2337% ( 5) 00:07:58.339 22786.363 - 22887.188: 98.2609% ( 2) 00:07:58.339 29844.086 - 30045.735: 98.3696% ( 8) 00:07:58.339 30045.735 - 30247.385: 98.4783% ( 8) 00:07:58.339 30247.385 - 30449.034: 98.6141% ( 10) 00:07:58.339 30449.034 - 30650.683: 98.7500% ( 10) 00:07:58.339 30650.683 - 30852.332: 98.8587% ( 8) 00:07:58.339 30852.332 - 31053.982: 98.9674% ( 8) 00:07:58.339 31053.982 - 31255.631: 99.0761% ( 8) 00:07:58.339 31255.631 - 31457.280: 99.1304% ( 4) 00:07:58.339 38515.003 - 38716.652: 99.1712% ( 3) 00:07:58.339 38716.652 - 38918.302: 99.2799% ( 8) 00:07:58.339 38918.302 - 
39119.951: 99.4022% ( 9) 00:07:58.339 39119.951 - 39321.600: 99.5245% ( 9) 00:07:58.339 39321.600 - 39523.249: 99.6467% ( 9) 00:07:58.339 39523.249 - 39724.898: 99.7690% ( 9) 00:07:58.339 39724.898 - 39926.548: 99.8913% ( 9) 00:07:58.339 39926.548 - 40128.197: 100.0000% ( 8) 00:07:58.339 00:07:58.339 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:58.339 ============================================================================== 00:07:58.339 Range in us Cumulative IO count 00:07:58.339 13107.200 - 13208.025: 0.0543% ( 4) 00:07:58.339 13208.025 - 13308.849: 0.1359% ( 6) 00:07:58.339 13308.849 - 13409.674: 0.1766% ( 3) 00:07:58.339 13409.674 - 13510.498: 0.2717% ( 7) 00:07:58.339 13510.498 - 13611.323: 0.3533% ( 6) 00:07:58.339 13611.323 - 13712.148: 0.4348% ( 6) 00:07:58.339 13712.148 - 13812.972: 0.5163% ( 6) 00:07:58.339 13812.972 - 13913.797: 0.6658% ( 11) 00:07:58.339 13913.797 - 14014.622: 0.8832% ( 16) 00:07:58.339 14014.622 - 14115.446: 1.0326% ( 11) 00:07:58.339 14115.446 - 14216.271: 1.2772% ( 18) 00:07:58.339 14216.271 - 14317.095: 1.4946% ( 16) 00:07:58.339 14317.095 - 14417.920: 1.7255% ( 17) 00:07:58.339 14417.920 - 14518.745: 2.0652% ( 25) 00:07:58.339 14518.745 - 14619.569: 2.5951% ( 39) 00:07:58.339 14619.569 - 14720.394: 3.0435% ( 33) 00:07:58.339 14720.394 - 14821.218: 3.8451% ( 59) 00:07:58.339 14821.218 - 14922.043: 4.8505% ( 74) 00:07:58.339 14922.043 - 15022.868: 6.0190% ( 86) 00:07:58.339 15022.868 - 15123.692: 7.2826% ( 93) 00:07:58.339 15123.692 - 15224.517: 8.7500% ( 108) 00:07:58.339 15224.517 - 15325.342: 10.5163% ( 130) 00:07:58.339 15325.342 - 15426.166: 12.0516% ( 113) 00:07:58.339 15426.166 - 15526.991: 13.7092% ( 122) 00:07:58.339 15526.991 - 15627.815: 15.7745% ( 152) 00:07:58.339 15627.815 - 15728.640: 18.0435% ( 167) 00:07:58.339 15728.640 - 15829.465: 20.5435% ( 184) 00:07:58.339 15829.465 - 15930.289: 22.9348% ( 176) 00:07:58.339 15930.289 - 16031.114: 25.2717% ( 172) 00:07:58.339 16031.114 - 16131.938: 27.7989% ( 186) 00:07:58.339 16131.938 - 16232.763: 30.2310% ( 179) 00:07:58.339 16232.763 - 16333.588: 32.9891% ( 203) 00:07:58.339 16333.588 - 16434.412: 35.3125% ( 171) 00:07:58.339 16434.412 - 16535.237: 37.7310% ( 178) 00:07:58.339 16535.237 - 16636.062: 39.9185% ( 161) 00:07:58.339 16636.062 - 16736.886: 42.2011% ( 168) 00:07:58.339 16736.886 - 16837.711: 44.3886% ( 161) 00:07:58.339 16837.711 - 16938.535: 46.9973% ( 192) 00:07:58.339 16938.535 - 17039.360: 49.2527% ( 166) 00:07:58.339 17039.360 - 17140.185: 51.1413% ( 139) 00:07:58.339 17140.185 - 17241.009: 53.2609% ( 156) 00:07:58.339 17241.009 - 17341.834: 55.3125% ( 151) 00:07:58.339 17341.834 - 17442.658: 57.4049% ( 154) 00:07:58.339 17442.658 - 17543.483: 59.1712% ( 130) 00:07:58.339 17543.483 - 17644.308: 61.0870% ( 141) 00:07:58.339 17644.308 - 17745.132: 63.0299% ( 143) 00:07:58.339 17745.132 - 17845.957: 64.9321% ( 140) 00:07:58.339 17845.957 - 17946.782: 66.8342% ( 140) 00:07:58.339 17946.782 - 18047.606: 68.8587% ( 149) 00:07:58.339 18047.606 - 18148.431: 70.6658% ( 133) 00:07:58.339 18148.431 - 18249.255: 72.4457% ( 131) 00:07:58.339 18249.255 - 18350.080: 74.2391% ( 132) 00:07:58.339 18350.080 - 18450.905: 76.0326% ( 132) 00:07:58.339 18450.905 - 18551.729: 77.7446% ( 126) 00:07:58.339 18551.729 - 18652.554: 79.2527% ( 111) 00:07:58.339 18652.554 - 18753.378: 80.9375% ( 124) 00:07:58.339 18753.378 - 18854.203: 82.1603% ( 90) 00:07:58.339 18854.203 - 18955.028: 83.5870% ( 105) 00:07:58.339 18955.028 - 19055.852: 85.1630% ( 116) 00:07:58.339 19055.852 - 
19156.677: 86.3995% ( 91) 00:07:58.339 19156.677 - 19257.502: 87.3234% ( 68) 00:07:58.339 19257.502 - 19358.326: 88.3288% ( 74) 00:07:58.339 19358.326 - 19459.151: 89.1576% ( 61) 00:07:58.339 19459.151 - 19559.975: 89.7011% ( 40) 00:07:58.339 19559.975 - 19660.800: 90.4620% ( 56) 00:07:58.339 19660.800 - 19761.625: 90.8967% ( 32) 00:07:58.339 19761.625 - 19862.449: 91.3587% ( 34) 00:07:58.339 19862.449 - 19963.274: 91.8071% ( 33) 00:07:58.339 19963.274 - 20064.098: 92.2147% ( 30) 00:07:58.339 20064.098 - 20164.923: 92.6495% ( 32) 00:07:58.339 20164.923 - 20265.748: 93.1522% ( 37) 00:07:58.339 20265.748 - 20366.572: 93.7228% ( 42) 00:07:58.339 20366.572 - 20467.397: 94.0353% ( 23) 00:07:58.339 20467.397 - 20568.222: 94.3342% ( 22) 00:07:58.339 20568.222 - 20669.046: 94.6196% ( 21) 00:07:58.339 20669.046 - 20769.871: 94.8913% ( 20) 00:07:58.339 20769.871 - 20870.695: 95.1630% ( 20) 00:07:58.339 20870.695 - 20971.520: 95.4348% ( 20) 00:07:58.339 20971.520 - 21072.345: 95.7065% ( 20) 00:07:58.339 21072.345 - 21173.169: 96.1141% ( 30) 00:07:58.339 21173.169 - 21273.994: 96.4266% ( 23) 00:07:58.339 21273.994 - 21374.818: 96.5897% ( 12) 00:07:58.339 21374.818 - 21475.643: 96.8614% ( 20) 00:07:58.339 21475.643 - 21576.468: 96.9293% ( 5) 00:07:58.339 21677.292 - 21778.117: 97.0924% ( 12) 00:07:58.339 21778.117 - 21878.942: 97.1196% ( 2) 00:07:58.339 21878.942 - 21979.766: 97.2554% ( 10) 00:07:58.339 21979.766 - 22080.591: 97.3913% ( 10) 00:07:58.339 22080.591 - 22181.415: 97.4592% ( 5) 00:07:58.339 22181.415 - 22282.240: 97.6495% ( 14) 00:07:58.339 22282.240 - 22383.065: 97.7310% ( 6) 00:07:58.339 22383.065 - 22483.889: 97.7717% ( 3) 00:07:58.339 22483.889 - 22584.714: 97.8125% ( 3) 00:07:58.339 22584.714 - 22685.538: 97.8940% ( 6) 00:07:58.339 22685.538 - 22786.363: 98.0027% ( 8) 00:07:58.339 22786.363 - 22887.188: 98.0163% ( 1) 00:07:58.339 22887.188 - 22988.012: 98.0707% ( 4) 00:07:58.339 22988.012 - 23088.837: 98.1386% ( 5) 00:07:58.339 23088.837 - 23189.662: 98.1929% ( 4) 00:07:58.339 23189.662 - 23290.486: 98.2473% ( 4) 00:07:58.339 23290.486 - 23391.311: 98.2609% ( 1) 00:07:58.339 29037.489 - 29239.138: 98.3560% ( 7) 00:07:58.339 29239.138 - 29440.788: 98.5190% ( 12) 00:07:58.339 29440.788 - 29642.437: 98.5870% ( 5) 00:07:58.339 29642.437 - 29844.086: 98.6957% ( 8) 00:07:58.339 29844.086 - 30045.735: 98.7364% ( 3) 00:07:58.339 30045.735 - 30247.385: 98.7772% ( 3) 00:07:58.339 30247.385 - 30449.034: 98.9538% ( 13) 00:07:58.339 30449.034 - 30650.683: 99.1304% ( 13) 00:07:58.339 37708.406 - 37910.055: 99.1712% ( 3) 00:07:58.339 37910.055 - 38111.705: 99.2935% ( 9) 00:07:58.339 38111.705 - 38313.354: 99.4158% ( 9) 00:07:58.339 38313.354 - 38515.003: 99.4973% ( 6) 00:07:58.339 38515.003 - 38716.652: 99.6196% ( 9) 00:07:58.339 38716.652 - 38918.302: 99.7554% ( 10) 00:07:58.339 38918.302 - 39119.951: 99.8777% ( 9) 00:07:58.339 39119.951 - 39321.600: 99.9457% ( 5) 00:07:58.339 39321.600 - 39523.249: 100.0000% ( 4) 00:07:58.339 00:07:58.339 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:58.339 ============================================================================== 00:07:58.339 Range in us Cumulative IO count 00:07:58.339 11796.480 - 11846.892: 0.0408% ( 3) 00:07:58.339 11846.892 - 11897.305: 0.0815% ( 3) 00:07:58.339 11897.305 - 11947.717: 0.1223% ( 3) 00:07:58.339 11947.717 - 11998.129: 0.1630% ( 3) 00:07:58.339 11998.129 - 12048.542: 0.2038% ( 3) 00:07:58.339 12048.542 - 12098.954: 0.2446% ( 3) 00:07:58.339 12098.954 - 12149.366: 0.2853% ( 3) 00:07:58.339 12149.366 - 
12199.778: 0.3397% ( 4) 00:07:58.339 12199.778 - 12250.191: 0.3804% ( 3) 00:07:58.339 12250.191 - 12300.603: 0.4212% ( 3) 00:07:58.339 12300.603 - 12351.015: 0.4620% ( 3) 00:07:58.339 12351.015 - 12401.428: 0.5163% ( 4) 00:07:58.339 12401.428 - 12451.840: 0.5571% ( 3) 00:07:58.339 12451.840 - 12502.252: 0.6114% ( 4) 00:07:58.339 12502.252 - 12552.665: 0.6522% ( 3) 00:07:58.339 12552.665 - 12603.077: 0.6929% ( 3) 00:07:58.339 12603.077 - 12653.489: 0.7473% ( 4) 00:07:58.339 12653.489 - 12703.902: 0.7880% ( 3) 00:07:58.339 12703.902 - 12754.314: 0.8288% ( 3) 00:07:58.339 12754.314 - 12804.726: 0.8560% ( 2) 00:07:58.339 12804.726 - 12855.138: 0.8696% ( 1) 00:07:58.339 14014.622 - 14115.446: 0.9511% ( 6) 00:07:58.339 14115.446 - 14216.271: 1.0870% ( 10) 00:07:58.339 14216.271 - 14317.095: 1.2500% ( 12) 00:07:58.339 14317.095 - 14417.920: 1.5082% ( 19) 00:07:58.340 14417.920 - 14518.745: 1.8071% ( 22) 00:07:58.340 14518.745 - 14619.569: 2.1467% ( 25) 00:07:58.340 14619.569 - 14720.394: 2.6087% ( 34) 00:07:58.340 14720.394 - 14821.218: 3.2473% ( 47) 00:07:58.340 14821.218 - 14922.043: 4.0897% ( 62) 00:07:58.340 14922.043 - 15022.868: 5.0136% ( 68) 00:07:58.340 15022.868 - 15123.692: 6.1957% ( 87) 00:07:58.340 15123.692 - 15224.517: 7.3777% ( 87) 00:07:58.340 15224.517 - 15325.342: 8.7500% ( 101) 00:07:58.340 15325.342 - 15426.166: 10.1766% ( 105) 00:07:58.340 15426.166 - 15526.991: 12.0788% ( 140) 00:07:58.340 15526.991 - 15627.815: 13.8859% ( 133) 00:07:58.340 15627.815 - 15728.640: 15.8696% ( 146) 00:07:58.340 15728.640 - 15829.465: 18.2201% ( 173) 00:07:58.340 15829.465 - 15930.289: 20.8424% ( 193) 00:07:58.340 15930.289 - 16031.114: 23.3832% ( 187) 00:07:58.340 16031.114 - 16131.938: 26.0734% ( 198) 00:07:58.340 16131.938 - 16232.763: 29.1712% ( 228) 00:07:58.340 16232.763 - 16333.588: 32.1875% ( 222) 00:07:58.340 16333.588 - 16434.412: 35.2038% ( 222) 00:07:58.340 16434.412 - 16535.237: 38.3016% ( 228) 00:07:58.340 16535.237 - 16636.062: 41.3315% ( 223) 00:07:58.340 16636.062 - 16736.886: 43.9538% ( 193) 00:07:58.340 16736.886 - 16837.711: 46.3995% ( 180) 00:07:58.340 16837.711 - 16938.535: 48.9810% ( 190) 00:07:58.340 16938.535 - 17039.360: 51.4130% ( 179) 00:07:58.340 17039.360 - 17140.185: 53.7772% ( 174) 00:07:58.340 17140.185 - 17241.009: 55.6793% ( 140) 00:07:58.340 17241.009 - 17341.834: 57.2690% ( 117) 00:07:58.340 17341.834 - 17442.658: 58.6413% ( 101) 00:07:58.340 17442.658 - 17543.483: 60.4620% ( 134) 00:07:58.340 17543.483 - 17644.308: 62.0652% ( 118) 00:07:58.340 17644.308 - 17745.132: 63.6549% ( 117) 00:07:58.340 17745.132 - 17845.957: 65.2446% ( 117) 00:07:58.340 17845.957 - 17946.782: 66.7935% ( 114) 00:07:58.340 17946.782 - 18047.606: 68.4783% ( 124) 00:07:58.340 18047.606 - 18148.431: 70.1223% ( 121) 00:07:58.340 18148.431 - 18249.255: 71.7391% ( 119) 00:07:58.340 18249.255 - 18350.080: 73.2880% ( 114) 00:07:58.340 18350.080 - 18450.905: 75.0815% ( 132) 00:07:58.340 18450.905 - 18551.729: 76.8750% ( 132) 00:07:58.340 18551.729 - 18652.554: 78.5598% ( 124) 00:07:58.340 18652.554 - 18753.378: 80.0951% ( 113) 00:07:58.340 18753.378 - 18854.203: 81.4946% ( 103) 00:07:58.340 18854.203 - 18955.028: 82.8397% ( 99) 00:07:58.340 18955.028 - 19055.852: 84.2527% ( 104) 00:07:58.340 19055.852 - 19156.677: 85.4891% ( 91) 00:07:58.340 19156.677 - 19257.502: 86.5489% ( 78) 00:07:58.340 19257.502 - 19358.326: 87.6223% ( 79) 00:07:58.340 19358.326 - 19459.151: 88.5054% ( 65) 00:07:58.340 19459.151 - 19559.975: 89.4022% ( 66) 00:07:58.340 19559.975 - 19660.800: 90.1630% ( 56) 00:07:58.340 
19660.800 - 19761.625: 90.8016% ( 47) 00:07:58.340 19761.625 - 19862.449: 91.4266% ( 46) 00:07:58.340 19862.449 - 19963.274: 92.0788% ( 48) 00:07:58.340 19963.274 - 20064.098: 92.7446% ( 49) 00:07:58.340 20064.098 - 20164.923: 93.3288% ( 43) 00:07:58.340 20164.923 - 20265.748: 93.8179% ( 36) 00:07:58.340 20265.748 - 20366.572: 94.1848% ( 27) 00:07:58.340 20366.572 - 20467.397: 94.6739% ( 36) 00:07:58.340 20467.397 - 20568.222: 95.1087% ( 32) 00:07:58.340 20568.222 - 20669.046: 95.5299% ( 31) 00:07:58.340 20669.046 - 20769.871: 95.8967% ( 27) 00:07:58.340 20769.871 - 20870.695: 96.1957% ( 22) 00:07:58.340 20870.695 - 20971.520: 96.4538% ( 19) 00:07:58.340 20971.520 - 21072.345: 96.7120% ( 19) 00:07:58.340 21072.345 - 21173.169: 96.8614% ( 11) 00:07:58.340 21173.169 - 21273.994: 96.9973% ( 10) 00:07:58.340 21273.994 - 21374.818: 97.0788% ( 6) 00:07:58.340 21374.818 - 21475.643: 97.1739% ( 7) 00:07:58.340 21475.643 - 21576.468: 97.2418% ( 5) 00:07:58.340 21576.468 - 21677.292: 97.3098% ( 5) 00:07:58.340 21677.292 - 21778.117: 97.3777% ( 5) 00:07:58.340 21778.117 - 21878.942: 97.3913% ( 1) 00:07:58.340 22181.415 - 22282.240: 97.4321% ( 3) 00:07:58.340 22282.240 - 22383.065: 97.6223% ( 14) 00:07:58.340 22383.065 - 22483.889: 97.7717% ( 11) 00:07:58.340 22483.889 - 22584.714: 97.7853% ( 1) 00:07:58.340 22584.714 - 22685.538: 97.8125% ( 2) 00:07:58.340 22685.538 - 22786.363: 97.8668% ( 4) 00:07:58.340 22786.363 - 22887.188: 97.9348% ( 5) 00:07:58.340 22887.188 - 22988.012: 98.0027% ( 5) 00:07:58.340 22988.012 - 23088.837: 98.0842% ( 6) 00:07:58.340 23088.837 - 23189.662: 98.1522% ( 5) 00:07:58.340 23189.662 - 23290.486: 98.2201% ( 5) 00:07:58.340 23290.486 - 23391.311: 98.2609% ( 3) 00:07:58.340 28230.892 - 28432.542: 98.3560% ( 7) 00:07:58.340 28432.542 - 28634.191: 98.4647% ( 8) 00:07:58.340 28634.191 - 28835.840: 98.5734% ( 8) 00:07:58.340 28835.840 - 29037.489: 98.6957% ( 9) 00:07:58.340 29037.489 - 29239.138: 98.8179% ( 9) 00:07:58.340 29239.138 - 29440.788: 98.9402% ( 9) 00:07:58.340 29440.788 - 29642.437: 99.0489% ( 8) 00:07:58.340 29642.437 - 29844.086: 99.1304% ( 6) 00:07:58.340 36901.809 - 37103.458: 99.2391% ( 8) 00:07:58.340 37103.458 - 37305.108: 99.3614% ( 9) 00:07:58.340 37305.108 - 37506.757: 99.4701% ( 8) 00:07:58.340 37506.757 - 37708.406: 99.5924% ( 9) 00:07:58.340 37708.406 - 37910.055: 99.7147% ( 9) 00:07:58.340 37910.055 - 38111.705: 99.8370% ( 9) 00:07:58.340 38111.705 - 38313.354: 99.9592% ( 9) 00:07:58.340 38313.354 - 38515.003: 100.0000% ( 3) 00:07:58.340 00:07:58.340 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:58.340 ============================================================================== 00:07:58.340 Range in us Cumulative IO count 00:07:58.340 8670.917 - 8721.329: 0.0136% ( 1) 00:07:58.340 8721.329 - 8771.742: 0.0408% ( 2) 00:07:58.340 8771.742 - 8822.154: 0.1087% ( 5) 00:07:58.340 8822.154 - 8872.566: 0.1495% ( 3) 00:07:58.340 8872.566 - 8922.978: 0.1902% ( 3) 00:07:58.340 8922.978 - 8973.391: 0.2310% ( 3) 00:07:58.340 8973.391 - 9023.803: 0.2717% ( 3) 00:07:58.340 9023.803 - 9074.215: 0.3125% ( 3) 00:07:58.340 9074.215 - 9124.628: 0.3533% ( 3) 00:07:58.340 9124.628 - 9175.040: 0.3940% ( 3) 00:07:58.340 9175.040 - 9225.452: 0.4212% ( 2) 00:07:58.340 9225.452 - 9275.865: 0.4755% ( 4) 00:07:58.340 9275.865 - 9326.277: 0.5163% ( 3) 00:07:58.340 9326.277 - 9376.689: 0.5571% ( 3) 00:07:58.340 9376.689 - 9427.102: 0.5978% ( 3) 00:07:58.340 9427.102 - 9477.514: 0.6386% ( 3) 00:07:58.340 9477.514 - 9527.926: 0.6793% ( 3) 00:07:58.340 9527.926 - 
9578.338: 0.7201% ( 3) 00:07:58.340 9578.338 - 9628.751: 0.7609% ( 3) 00:07:58.340 9628.751 - 9679.163: 0.8152% ( 4) 00:07:58.340 9679.163 - 9729.575: 0.8560% ( 3) 00:07:58.340 9729.575 - 9779.988: 0.8696% ( 1) 00:07:58.340 13712.148 - 13812.972: 0.9239% ( 4) 00:07:58.340 13812.972 - 13913.797: 1.0870% ( 12) 00:07:58.340 13913.797 - 14014.622: 1.2228% ( 10) 00:07:58.340 14014.622 - 14115.446: 1.5353% ( 23) 00:07:58.340 14115.446 - 14216.271: 1.8342% ( 22) 00:07:58.340 14216.271 - 14317.095: 2.1739% ( 25) 00:07:58.340 14317.095 - 14417.920: 2.5000% ( 24) 00:07:58.340 14417.920 - 14518.745: 2.9755% ( 35) 00:07:58.340 14518.745 - 14619.569: 3.5734% ( 44) 00:07:58.340 14619.569 - 14720.394: 4.0082% ( 32) 00:07:58.340 14720.394 - 14821.218: 4.6332% ( 46) 00:07:58.340 14821.218 - 14922.043: 5.3533% ( 53) 00:07:58.340 14922.043 - 15022.868: 6.3043% ( 70) 00:07:58.340 15022.868 - 15123.692: 7.1060% ( 59) 00:07:58.340 15123.692 - 15224.517: 8.0299% ( 68) 00:07:58.340 15224.517 - 15325.342: 9.2527% ( 90) 00:07:58.340 15325.342 - 15426.166: 10.6522% ( 103) 00:07:58.340 15426.166 - 15526.991: 12.1332% ( 109) 00:07:58.340 15526.991 - 15627.815: 13.8723% ( 128) 00:07:58.340 15627.815 - 15728.640: 15.9103% ( 150) 00:07:58.340 15728.640 - 15829.465: 17.9212% ( 148) 00:07:58.340 15829.465 - 15930.289: 20.3261% ( 177) 00:07:58.340 15930.289 - 16031.114: 22.6359% ( 170) 00:07:58.340 16031.114 - 16131.938: 24.9457% ( 170) 00:07:58.340 16131.938 - 16232.763: 27.0788% ( 157) 00:07:58.340 16232.763 - 16333.588: 29.3071% ( 164) 00:07:58.340 16333.588 - 16434.412: 31.4946% ( 161) 00:07:58.340 16434.412 - 16535.237: 33.8587% ( 174) 00:07:58.340 16535.237 - 16636.062: 36.4674% ( 192) 00:07:58.340 16636.062 - 16736.886: 39.2391% ( 204) 00:07:58.340 16736.886 - 16837.711: 42.0924% ( 210) 00:07:58.340 16837.711 - 16938.535: 44.5516% ( 181) 00:07:58.340 16938.535 - 17039.360: 47.0245% ( 182) 00:07:58.340 17039.360 - 17140.185: 49.3614% ( 172) 00:07:58.340 17140.185 - 17241.009: 51.8886% ( 186) 00:07:58.340 17241.009 - 17341.834: 54.4701% ( 190) 00:07:58.340 17341.834 - 17442.658: 57.3370% ( 211) 00:07:58.340 17442.658 - 17543.483: 60.0815% ( 202) 00:07:58.340 17543.483 - 17644.308: 62.7310% ( 195) 00:07:58.340 17644.308 - 17745.132: 65.4891% ( 203) 00:07:58.340 17745.132 - 17845.957: 67.6495% ( 159) 00:07:58.340 17845.957 - 17946.782: 69.9728% ( 171) 00:07:58.340 17946.782 - 18047.606: 72.0924% ( 156) 00:07:58.340 18047.606 - 18148.431: 74.0625% ( 145) 00:07:58.340 18148.431 - 18249.255: 76.0190% ( 144) 00:07:58.340 18249.255 - 18350.080: 77.8668% ( 136) 00:07:58.340 18350.080 - 18450.905: 79.4022% ( 113) 00:07:58.340 18450.905 - 18551.729: 80.8152% ( 104) 00:07:58.340 18551.729 - 18652.554: 82.1196% ( 96) 00:07:58.340 18652.554 - 18753.378: 83.3288% ( 89) 00:07:58.340 18753.378 - 18854.203: 84.5245% ( 88) 00:07:58.340 18854.203 - 18955.028: 85.5842% ( 78) 00:07:58.340 18955.028 - 19055.852: 86.4946% ( 67) 00:07:58.340 19055.852 - 19156.677: 87.3370% ( 62) 00:07:58.340 19156.677 - 19257.502: 88.1793% ( 62) 00:07:58.340 19257.502 - 19358.326: 88.9674% ( 58) 00:07:58.340 19358.326 - 19459.151: 89.5109% ( 40) 00:07:58.341 19459.151 - 19559.975: 89.8234% ( 23) 00:07:58.341 19559.975 - 19660.800: 90.1087% ( 21) 00:07:58.341 19660.800 - 19761.625: 90.5435% ( 32) 00:07:58.341 19761.625 - 19862.449: 90.8696% ( 24) 00:07:58.341 19862.449 - 19963.274: 91.2092% ( 25) 00:07:58.341 19963.274 - 20064.098: 91.6576% ( 33) 00:07:58.341 20064.098 - 20164.923: 92.3098% ( 48) 00:07:58.341 20164.923 - 20265.748: 92.8261% ( 38) 00:07:58.341 
20265.748 - 20366.572: 93.2880% ( 34) 00:07:58.341 20366.572 - 20467.397: 93.6141% ( 24) 00:07:58.341 20467.397 - 20568.222: 93.9538% ( 25) 00:07:58.341 20568.222 - 20669.046: 94.4293% ( 35) 00:07:58.341 20669.046 - 20769.871: 94.8641% ( 32) 00:07:58.341 20769.871 - 20870.695: 95.2582% ( 29) 00:07:58.341 20870.695 - 20971.520: 95.6929% ( 32) 00:07:58.341 20971.520 - 21072.345: 96.0326% ( 25) 00:07:58.341 21072.345 - 21173.169: 96.2772% ( 18) 00:07:58.341 21173.169 - 21273.994: 96.5217% ( 18) 00:07:58.341 21273.994 - 21374.818: 96.7255% ( 15) 00:07:58.341 21374.818 - 21475.643: 96.9022% ( 13) 00:07:58.341 21475.643 - 21576.468: 97.0380% ( 10) 00:07:58.341 21576.468 - 21677.292: 97.1603% ( 9) 00:07:58.341 21677.292 - 21778.117: 97.2962% ( 10) 00:07:58.341 21778.117 - 21878.942: 97.4185% ( 9) 00:07:58.341 21878.942 - 21979.766: 97.6495% ( 17) 00:07:58.341 21979.766 - 22080.591: 97.7582% ( 8) 00:07:58.341 22080.591 - 22181.415: 97.8261% ( 5) 00:07:58.341 22181.415 - 22282.240: 97.8533% ( 2) 00:07:58.341 22282.240 - 22383.065: 97.8940% ( 3) 00:07:58.341 22383.065 - 22483.889: 97.9620% ( 5) 00:07:58.341 22483.889 - 22584.714: 98.0163% ( 4) 00:07:58.341 22584.714 - 22685.538: 98.0842% ( 5) 00:07:58.341 22685.538 - 22786.363: 98.1114% ( 2) 00:07:58.341 22786.363 - 22887.188: 98.1658% ( 4) 00:07:58.341 22887.188 - 22988.012: 98.2065% ( 3) 00:07:58.341 22988.012 - 23088.837: 98.2473% ( 3) 00:07:58.341 23088.837 - 23189.662: 98.2609% ( 1) 00:07:58.341 28634.191 - 28835.840: 98.3288% ( 5) 00:07:58.341 28835.840 - 29037.489: 98.4375% ( 8) 00:07:58.341 29037.489 - 29239.138: 98.5462% ( 8) 00:07:58.341 29239.138 - 29440.788: 98.6685% ( 9) 00:07:58.341 29440.788 - 29642.437: 98.7772% ( 8) 00:07:58.341 29642.437 - 29844.086: 98.8859% ( 8) 00:07:58.341 29844.086 - 30045.735: 99.0082% ( 9) 00:07:58.341 30045.735 - 30247.385: 99.1168% ( 8) 00:07:58.341 30247.385 - 30449.034: 99.1304% ( 1) 00:07:58.341 37103.458 - 37305.108: 99.1440% ( 1) 00:07:58.341 37305.108 - 37506.757: 99.2391% ( 7) 00:07:58.341 37506.757 - 37708.406: 99.3614% ( 9) 00:07:58.341 37708.406 - 37910.055: 99.4837% ( 9) 00:07:58.341 37910.055 - 38111.705: 99.6060% ( 9) 00:07:58.341 38111.705 - 38313.354: 99.7147% ( 8) 00:07:58.341 38313.354 - 38515.003: 99.8370% ( 9) 00:07:58.341 38515.003 - 38716.652: 99.9592% ( 9) 00:07:58.341 38716.652 - 38918.302: 100.0000% ( 3) 00:07:58.341 00:07:58.341 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:58.341 ============================================================================== 00:07:58.341 Range in us Cumulative IO count 00:07:58.341 7410.609 - 7461.022: 0.0543% ( 4) 00:07:58.341 7461.022 - 7511.434: 0.0815% ( 2) 00:07:58.341 7511.434 - 7561.846: 0.1223% ( 3) 00:07:58.341 7561.846 - 7612.258: 0.1766% ( 4) 00:07:58.341 7612.258 - 7662.671: 0.2174% ( 3) 00:07:58.341 7662.671 - 7713.083: 0.2717% ( 4) 00:07:58.341 7713.083 - 7763.495: 0.3125% ( 3) 00:07:58.341 7763.495 - 7813.908: 0.3533% ( 3) 00:07:58.341 7813.908 - 7864.320: 0.3940% ( 3) 00:07:58.341 7864.320 - 7914.732: 0.4348% ( 3) 00:07:58.341 7914.732 - 7965.145: 0.4755% ( 3) 00:07:58.341 7965.145 - 8015.557: 0.5163% ( 3) 00:07:58.341 8015.557 - 8065.969: 0.5571% ( 3) 00:07:58.341 8065.969 - 8116.382: 0.5978% ( 3) 00:07:58.341 8116.382 - 8166.794: 0.6522% ( 4) 00:07:58.341 8166.794 - 8217.206: 0.6929% ( 3) 00:07:58.341 8217.206 - 8267.618: 0.7337% ( 3) 00:07:58.341 8267.618 - 8318.031: 0.7609% ( 2) 00:07:58.341 8318.031 - 8368.443: 0.8016% ( 3) 00:07:58.341 8368.443 - 8418.855: 0.8424% ( 3) 00:07:58.341 8418.855 - 8469.268: 
0.8696% ( 2) 00:07:58.341 13510.498 - 13611.323: 0.9375% ( 5) 00:07:58.341 13611.323 - 13712.148: 1.0190% ( 6) 00:07:58.341 13712.148 - 13812.972: 1.0870% ( 5) 00:07:58.341 13812.972 - 13913.797: 1.1685% ( 6) 00:07:58.341 13913.797 - 14014.622: 1.2500% ( 6) 00:07:58.341 14014.622 - 14115.446: 1.4402% ( 14) 00:07:58.341 14115.446 - 14216.271: 1.8886% ( 33) 00:07:58.341 14216.271 - 14317.095: 2.3234% ( 32) 00:07:58.341 14317.095 - 14417.920: 2.6902% ( 27) 00:07:58.341 14417.920 - 14518.745: 3.0163% ( 24) 00:07:58.341 14518.745 - 14619.569: 3.4239% ( 30) 00:07:58.341 14619.569 - 14720.394: 3.9266% ( 37) 00:07:58.341 14720.394 - 14821.218: 4.4565% ( 39) 00:07:58.341 14821.218 - 14922.043: 5.1359% ( 50) 00:07:58.341 14922.043 - 15022.868: 5.8967% ( 56) 00:07:58.341 15022.868 - 15123.692: 6.7799% ( 65) 00:07:58.341 15123.692 - 15224.517: 7.8397% ( 78) 00:07:58.341 15224.517 - 15325.342: 9.0897% ( 92) 00:07:58.341 15325.342 - 15426.166: 10.2989% ( 89) 00:07:58.341 15426.166 - 15526.991: 11.6984% ( 103) 00:07:58.341 15526.991 - 15627.815: 13.3424% ( 121) 00:07:58.341 15627.815 - 15728.640: 15.3397% ( 147) 00:07:58.341 15728.640 - 15829.465: 17.3098% ( 145) 00:07:58.341 15829.465 - 15930.289: 19.3614% ( 151) 00:07:58.341 15930.289 - 16031.114: 21.5897% ( 164) 00:07:58.341 16031.114 - 16131.938: 23.9538% ( 174) 00:07:58.341 16131.938 - 16232.763: 26.3179% ( 174) 00:07:58.341 16232.763 - 16333.588: 28.9810% ( 196) 00:07:58.341 16333.588 - 16434.412: 31.6984% ( 200) 00:07:58.341 16434.412 - 16535.237: 34.5924% ( 213) 00:07:58.341 16535.237 - 16636.062: 37.2826% ( 198) 00:07:58.341 16636.062 - 16736.886: 40.1087% ( 208) 00:07:58.341 16736.886 - 16837.711: 43.0299% ( 215) 00:07:58.341 16837.711 - 16938.535: 45.8832% ( 210) 00:07:58.341 16938.535 - 17039.360: 48.7092% ( 208) 00:07:58.341 17039.360 - 17140.185: 51.5761% ( 211) 00:07:58.341 17140.185 - 17241.009: 54.3614% ( 205) 00:07:58.341 17241.009 - 17341.834: 56.9973% ( 194) 00:07:58.341 17341.834 - 17442.658: 59.3750% ( 175) 00:07:58.341 17442.658 - 17543.483: 62.0109% ( 194) 00:07:58.341 17543.483 - 17644.308: 64.5109% ( 184) 00:07:58.341 17644.308 - 17745.132: 66.6848% ( 160) 00:07:58.341 17745.132 - 17845.957: 68.7228% ( 150) 00:07:58.341 17845.957 - 17946.782: 70.6793% ( 144) 00:07:58.341 17946.782 - 18047.606: 72.5272% ( 136) 00:07:58.341 18047.606 - 18148.431: 74.4293% ( 140) 00:07:58.341 18148.431 - 18249.255: 76.1685% ( 128) 00:07:58.341 18249.255 - 18350.080: 77.8804% ( 126) 00:07:58.341 18350.080 - 18450.905: 79.3886% ( 111) 00:07:58.341 18450.905 - 18551.729: 80.9375% ( 114) 00:07:58.341 18551.729 - 18652.554: 82.2826% ( 99) 00:07:58.341 18652.554 - 18753.378: 83.4375% ( 85) 00:07:58.341 18753.378 - 18854.203: 84.5652% ( 83) 00:07:58.341 18854.203 - 18955.028: 85.4348% ( 64) 00:07:58.341 18955.028 - 19055.852: 86.2500% ( 60) 00:07:58.341 19055.852 - 19156.677: 86.8886% ( 47) 00:07:58.341 19156.677 - 19257.502: 87.4457% ( 41) 00:07:58.341 19257.502 - 19358.326: 87.9348% ( 36) 00:07:58.341 19358.326 - 19459.151: 88.5462% ( 45) 00:07:58.341 19459.151 - 19559.975: 89.1033% ( 41) 00:07:58.341 19559.975 - 19660.800: 89.6332% ( 39) 00:07:58.341 19660.800 - 19761.625: 90.3533% ( 53) 00:07:58.341 19761.625 - 19862.449: 90.8832% ( 39) 00:07:58.341 19862.449 - 19963.274: 91.3995% ( 38) 00:07:58.341 19963.274 - 20064.098: 91.8614% ( 34) 00:07:58.341 20064.098 - 20164.923: 92.2962% ( 32) 00:07:58.341 20164.923 - 20265.748: 92.7582% ( 34) 00:07:58.341 20265.748 - 20366.572: 93.2880% ( 39) 00:07:58.341 20366.572 - 20467.397: 93.8179% ( 39) 00:07:58.341 
20467.397 - 20568.222: 94.2935% ( 35) 00:07:58.341 20568.222 - 20669.046: 94.6875% ( 29) 00:07:58.341 20669.046 - 20769.871: 94.9321% ( 18) 00:07:58.341 20769.871 - 20870.695: 95.1223% ( 14) 00:07:58.341 20870.695 - 20971.520: 95.2853% ( 12) 00:07:58.341 20971.520 - 21072.345: 95.4891% ( 15) 00:07:58.341 21072.345 - 21173.169: 95.7473% ( 19) 00:07:58.341 21173.169 - 21273.994: 95.9783% ( 17) 00:07:58.341 21273.994 - 21374.818: 96.1549% ( 13) 00:07:58.341 21374.818 - 21475.643: 96.3179% ( 12) 00:07:58.341 21475.643 - 21576.468: 96.4538% ( 10) 00:07:58.341 21576.468 - 21677.292: 96.5897% ( 10) 00:07:58.341 21677.292 - 21778.117: 96.7255% ( 10) 00:07:58.341 21778.117 - 21878.942: 96.8614% ( 10) 00:07:58.341 21878.942 - 21979.766: 97.0380% ( 13) 00:07:58.341 21979.766 - 22080.591: 97.2554% ( 16) 00:07:58.341 22080.591 - 22181.415: 97.4457% ( 14) 00:07:58.341 22181.415 - 22282.240: 97.6087% ( 12) 00:07:58.341 22282.240 - 22383.065: 97.7174% ( 8) 00:07:58.341 22383.065 - 22483.889: 97.7853% ( 5) 00:07:58.341 22483.889 - 22584.714: 97.8533% ( 5) 00:07:58.341 22584.714 - 22685.538: 97.9212% ( 5) 00:07:58.341 22685.538 - 22786.363: 97.9891% ( 5) 00:07:58.341 22786.363 - 22887.188: 98.0571% ( 5) 00:07:58.341 22887.188 - 22988.012: 98.1250% ( 5) 00:07:58.341 22988.012 - 23088.837: 98.1793% ( 4) 00:07:58.341 23088.837 - 23189.662: 98.2473% ( 5) 00:07:58.341 23189.662 - 23290.486: 98.2609% ( 1) 00:07:58.341 28230.892 - 28432.542: 98.2745% ( 1) 00:07:58.341 28432.542 - 28634.191: 98.3832% ( 8) 00:07:58.341 28634.191 - 28835.840: 98.5054% ( 9) 00:07:58.341 28835.840 - 29037.489: 98.6277% ( 9) 00:07:58.342 29037.489 - 29239.138: 98.7500% ( 9) 00:07:58.342 29239.138 - 29440.788: 98.8723% ( 9) 00:07:58.342 29440.788 - 29642.437: 98.9810% ( 8) 00:07:58.342 29642.437 - 29844.086: 99.1033% ( 9) 00:07:58.342 29844.086 - 30045.735: 99.1304% ( 2) 00:07:58.342 36498.511 - 36700.160: 99.1848% ( 4) 00:07:58.342 36700.160 - 36901.809: 99.3071% ( 9) 00:07:58.342 36901.809 - 37103.458: 99.4293% ( 9) 00:07:58.342 37103.458 - 37305.108: 99.5516% ( 9) 00:07:58.342 37305.108 - 37506.757: 99.6739% ( 9) 00:07:58.342 37506.757 - 37708.406: 99.7826% ( 8) 00:07:58.342 37708.406 - 37910.055: 99.9049% ( 9) 00:07:58.342 37910.055 - 38111.705: 100.0000% ( 7) 00:07:58.342 00:07:58.342 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:58.342 ============================================================================== 00:07:58.342 Range in us Cumulative IO count 00:07:58.342 6276.332 - 6301.538: 0.0135% ( 1) 00:07:58.342 6301.538 - 6326.745: 0.0404% ( 2) 00:07:58.342 6326.745 - 6351.951: 0.0539% ( 1) 00:07:58.342 6351.951 - 6377.157: 0.0808% ( 2) 00:07:58.342 6377.157 - 6402.363: 0.0943% ( 1) 00:07:58.342 6402.363 - 6427.569: 0.1751% ( 6) 00:07:58.342 6427.569 - 6452.775: 0.2020% ( 2) 00:07:58.342 6452.775 - 6503.188: 0.2694% ( 5) 00:07:58.342 6503.188 - 6553.600: 0.3098% ( 3) 00:07:58.342 6553.600 - 6604.012: 0.3502% ( 3) 00:07:58.342 6604.012 - 6654.425: 0.4041% ( 4) 00:07:58.342 6654.425 - 6704.837: 0.4310% ( 2) 00:07:58.342 6704.837 - 6755.249: 0.4714% ( 3) 00:07:58.342 6755.249 - 6805.662: 0.4849% ( 1) 00:07:58.342 6805.662 - 6856.074: 0.5119% ( 2) 00:07:58.342 6856.074 - 6906.486: 0.5657% ( 4) 00:07:58.342 6906.486 - 6956.898: 0.6061% ( 3) 00:07:58.342 6956.898 - 7007.311: 0.6466% ( 3) 00:07:58.342 7007.311 - 7057.723: 0.6870% ( 3) 00:07:58.342 7057.723 - 7108.135: 0.7139% ( 2) 00:07:58.342 7108.135 - 7158.548: 0.7543% ( 3) 00:07:58.342 7158.548 - 7208.960: 0.7947% ( 3) 00:07:58.342 7208.960 - 7259.372: 
0.8351% ( 3) 00:07:58.342 7259.372 - 7309.785: 0.8621% ( 2) 00:07:58.342 13712.148 - 13812.972: 0.9159% ( 4) 00:07:58.342 13812.972 - 13913.797: 0.9698% ( 4) 00:07:58.342 13913.797 - 14014.622: 1.0372% ( 5) 00:07:58.342 14014.622 - 14115.446: 1.1180% ( 6) 00:07:58.342 14115.446 - 14216.271: 1.3200% ( 15) 00:07:58.342 14216.271 - 14317.095: 1.6029% ( 21) 00:07:58.342 14317.095 - 14417.920: 2.0609% ( 34) 00:07:58.342 14417.920 - 14518.745: 2.4111% ( 26) 00:07:58.342 14518.745 - 14619.569: 2.8960% ( 36) 00:07:58.342 14619.569 - 14720.394: 3.4752% ( 43) 00:07:58.342 14720.394 - 14821.218: 4.1622% ( 51) 00:07:58.342 14821.218 - 14922.043: 4.9300% ( 57) 00:07:58.342 14922.043 - 15022.868: 5.9133% ( 73) 00:07:58.342 15022.868 - 15123.692: 6.7753% ( 64) 00:07:58.342 15123.692 - 15224.517: 7.8125% ( 77) 00:07:58.342 15224.517 - 15325.342: 9.0787% ( 94) 00:07:58.342 15325.342 - 15426.166: 10.6008% ( 113) 00:07:58.342 15426.166 - 15526.991: 12.2306% ( 121) 00:07:58.342 15526.991 - 15627.815: 14.1298% ( 141) 00:07:58.342 15627.815 - 15728.640: 16.0830% ( 145) 00:07:58.342 15728.640 - 15829.465: 18.1843% ( 156) 00:07:58.342 15829.465 - 15930.289: 20.7031% ( 187) 00:07:58.342 15930.289 - 16031.114: 23.5453% ( 211) 00:07:58.342 16031.114 - 16131.938: 26.2931% ( 204) 00:07:58.342 16131.938 - 16232.763: 29.0275% ( 203) 00:07:58.342 16232.763 - 16333.588: 31.4925% ( 183) 00:07:58.342 16333.588 - 16434.412: 34.2403% ( 204) 00:07:58.342 16434.412 - 16535.237: 37.1498% ( 216) 00:07:58.342 16535.237 - 16636.062: 40.1266% ( 221) 00:07:58.342 16636.062 - 16736.886: 42.9149% ( 207) 00:07:58.342 16736.886 - 16837.711: 45.5954% ( 199) 00:07:58.342 16837.711 - 16938.535: 48.3028% ( 201) 00:07:58.342 16938.535 - 17039.360: 51.0776% ( 206) 00:07:58.342 17039.360 - 17140.185: 53.6234% ( 189) 00:07:58.342 17140.185 - 17241.009: 55.7381% ( 157) 00:07:58.342 17241.009 - 17341.834: 57.6913% ( 145) 00:07:58.342 17341.834 - 17442.658: 59.7252% ( 151) 00:07:58.342 17442.658 - 17543.483: 61.7188% ( 148) 00:07:58.342 17543.483 - 17644.308: 63.5506% ( 136) 00:07:58.342 17644.308 - 17745.132: 65.4230% ( 139) 00:07:58.342 17745.132 - 17845.957: 67.3087% ( 140) 00:07:58.342 17845.957 - 17946.782: 69.1676% ( 138) 00:07:58.342 17946.782 - 18047.606: 70.8782% ( 127) 00:07:58.342 18047.606 - 18148.431: 72.5350% ( 123) 00:07:58.342 18148.431 - 18249.255: 74.1514% ( 120) 00:07:58.342 18249.255 - 18350.080: 75.7812% ( 121) 00:07:58.342 18350.080 - 18450.905: 77.3976% ( 120) 00:07:58.342 18450.905 - 18551.729: 78.9601% ( 116) 00:07:58.342 18551.729 - 18652.554: 80.4553% ( 111) 00:07:58.342 18652.554 - 18753.378: 81.9504% ( 111) 00:07:58.342 18753.378 - 18854.203: 83.3648% ( 105) 00:07:58.342 18854.203 - 18955.028: 84.7522% ( 103) 00:07:58.342 18955.028 - 19055.852: 86.0318% ( 95) 00:07:58.342 19055.852 - 19156.677: 87.2171% ( 88) 00:07:58.342 19156.677 - 19257.502: 88.2947% ( 80) 00:07:58.342 19257.502 - 19358.326: 89.2241% ( 69) 00:07:58.342 19358.326 - 19459.151: 89.9380% ( 53) 00:07:58.342 19459.151 - 19559.975: 90.5172% ( 43) 00:07:58.342 19559.975 - 19660.800: 91.0156% ( 37) 00:07:58.342 19660.800 - 19761.625: 91.4736% ( 34) 00:07:58.342 19761.625 - 19862.449: 91.9720% ( 37) 00:07:58.342 19862.449 - 19963.274: 92.4434% ( 35) 00:07:58.342 19963.274 - 20064.098: 92.8206% ( 28) 00:07:58.342 20064.098 - 20164.923: 93.1977% ( 28) 00:07:58.342 20164.923 - 20265.748: 93.6692% ( 35) 00:07:58.342 20265.748 - 20366.572: 94.0867% ( 31) 00:07:58.342 20366.572 - 20467.397: 94.4639% ( 28) 00:07:58.342 20467.397 - 20568.222: 94.7872% ( 24) 
00:07:58.342 20568.222 - 20669.046: 95.0700% ( 21) 00:07:58.342 20669.046 - 20769.871: 95.3394% ( 20) 00:07:58.342 20769.871 - 20870.695: 95.5819% ( 18) 00:07:58.342 20870.695 - 20971.520: 95.8782% ( 22) 00:07:58.342 20971.520 - 21072.345: 96.2150% ( 25) 00:07:58.342 21072.345 - 21173.169: 96.4305% ( 16) 00:07:58.342 21173.169 - 21273.994: 96.5517% ( 9) 00:07:58.342 21273.994 - 21374.818: 96.6864% ( 10) 00:07:58.342 21374.818 - 21475.643: 96.8211% ( 10) 00:07:58.342 21475.643 - 21576.468: 97.0097% ( 14) 00:07:58.342 21576.468 - 21677.292: 97.2522% ( 18) 00:07:58.342 21677.292 - 21778.117: 97.4407% ( 14) 00:07:58.342 21778.117 - 21878.942: 97.6158% ( 13) 00:07:58.342 21878.942 - 21979.766: 97.8179% ( 15) 00:07:58.342 21979.766 - 22080.591: 98.0199% ( 15) 00:07:58.342 22080.591 - 22181.415: 98.2085% ( 14) 00:07:58.342 22181.415 - 22282.240: 98.3297% ( 9) 00:07:58.342 22282.240 - 22383.065: 98.4779% ( 11) 00:07:58.342 22383.065 - 22483.889: 98.6126% ( 10) 00:07:58.342 22483.889 - 22584.714: 98.7473% ( 10) 00:07:58.342 22584.714 - 22685.538: 98.8820% ( 10) 00:07:58.342 22685.538 - 22786.363: 99.0167% ( 10) 00:07:58.342 22786.363 - 22887.188: 99.0841% ( 5) 00:07:58.342 22887.188 - 22988.012: 99.1379% ( 4) 00:07:58.342 28432.542 - 28634.191: 99.2053% ( 5) 00:07:58.342 28634.191 - 28835.840: 99.3265% ( 9) 00:07:58.342 28835.840 - 29037.489: 99.4343% ( 8) 00:07:58.342 29037.489 - 29239.138: 99.5555% ( 9) 00:07:58.342 29239.138 - 29440.788: 99.6633% ( 8) 00:07:58.342 29440.788 - 29642.437: 99.7845% ( 9) 00:07:58.342 29642.437 - 29844.086: 99.8922% ( 8) 00:07:58.342 29844.086 - 30045.735: 100.0000% ( 8) 00:07:58.342 00:07:58.342 23:13:21 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:59.732 Initializing NVMe Controllers 00:07:59.732 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:59.732 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:59.732 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:59.732 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:59.732 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:59.732 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:59.732 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:59.732 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:59.732 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:59.732 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:59.732 Initialization complete. Launching workers. 
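For reference, the spdk_nvme_perf invocation captured above decodes as follows. This is a hedged reading based on the tool's usage text (the log itself never explains its flags), written as a hypothetical standalone re-run from the SPDK repository root:

    # Flag meanings assumed from spdk_nvme_perf usage, not stated in this log:
    #   -q 128     queue depth: up to 128 outstanding I/Os per namespace
    #   -w write   workload: 100% writes
    #   -o 12288   I/O size in bytes (12 KiB per request)
    #   -t 1       run time in seconds
    #   -LL        software latency tracking; giving -L twice also collects the
    #              per-bucket histograms printed after the summary tables
    #   -i 0       shared memory group ID (lets SPDK processes share hugepages)
    ./build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0

The tables that follow report, per attached namespace, IOPS, throughput in MiB/s, and average/min/max latency in microseconds, followed by percentile summaries and a full latency histogram for each namespace.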
00:07:59.732 ========================================================
00:07:59.732 Latency(us)
00:07:59.732 Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:59.732 PCIE (0000:00:13.0) NSID 1 from core 0:    8031.99      94.12   15952.99   11519.57   37909.17
00:07:59.732 PCIE (0000:00:10.0) NSID 1 from core 0:    8031.99      94.12   15938.84   10645.21   38417.25
00:07:59.732 PCIE (0000:00:11.0) NSID 1 from core 0:    8031.99      94.12   15921.70    9623.92   38025.39
00:07:59.732 PCIE (0000:00:12.0) NSID 1 from core 0:    8031.99      94.12   15904.59    7633.16   39680.72
00:07:59.732 PCIE (0000:00:12.0) NSID 2 from core 0:    8031.99      94.12   15887.64    7042.40   39644.88
00:07:59.732 PCIE (0000:00:12.0) NSID 3 from core 0:    8095.74      94.87   15745.79    6068.95   29114.30
00:07:59.732 ========================================================
00:07:59.732 Total                                  :   48255.70     565.50   15891.73    6068.95   39680.72
00:07:59.732
00:07:59.732 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:59.732 =================================================================================
00:07:59.732   1.00000% : 12451.840us
00:07:59.732  10.00000% : 13913.797us
00:07:59.732  25.00000% : 14619.569us
00:07:59.732  50.00000% : 15526.991us
00:07:59.732  75.00000% : 16736.886us
00:07:59.732  90.00000% : 18249.255us
00:07:59.732  95.00000% : 18955.028us
00:07:59.732  98.00000% : 20870.695us
00:07:59.732  99.00000% : 27625.945us
00:07:59.732  99.50000% : 36901.809us
00:07:59.732  99.90000% : 37910.055us
00:07:59.732  99.99000% : 37910.055us
00:07:59.732  99.99900% : 37910.055us
00:07:59.732  99.99990% : 37910.055us
00:07:59.732  99.99999% : 37910.055us
00:07:59.732
00:07:59.732 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:59.732 =================================================================================
00:07:59.732   1.00000% : 12552.665us
00:07:59.732  10.00000% : 13812.972us
00:07:59.732  25.00000% : 14518.745us
00:07:59.732  50.00000% : 15627.815us
00:07:59.732  75.00000% : 16837.711us
00:07:59.732  90.00000% : 18249.255us
00:07:59.732  95.00000% : 18955.028us
00:07:59.732  98.00000% : 20870.695us
00:07:59.732  99.00000% : 27222.646us
00:07:59.732  99.50000% : 37708.406us
00:07:59.732  99.90000% : 38313.354us
00:07:59.732  99.99000% : 38515.003us
00:07:59.732  99.99900% : 38515.003us
00:07:59.732  99.99990% : 38515.003us
00:07:59.732  99.99999% : 38515.003us
00:07:59.732
00:07:59.732 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:59.733 =================================================================================
00:07:59.733   1.00000% : 12905.551us
00:07:59.733  10.00000% : 13812.972us
00:07:59.733  25.00000% : 14619.569us
00:07:59.733  50.00000% : 15627.815us
00:07:59.733  75.00000% : 16736.886us
00:07:59.733  90.00000% : 18148.431us
00:07:59.733  95.00000% : 18753.378us
00:07:59.733  98.00000% : 21173.169us
00:07:59.733  99.00000% : 27020.997us
00:07:59.733  99.50000% : 37305.108us
00:07:59.733  99.90000% : 37910.055us
00:07:59.733  99.99000% : 38111.705us
00:07:59.733  99.99900% : 38111.705us
00:07:59.733  99.99990% : 38111.705us
00:07:59.733  99.99999% : 38111.705us
00:07:59.733
00:07:59.733 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:59.733 =================================================================================
00:07:59.733   1.00000% : 12653.489us
00:07:59.733  10.00000% : 13913.797us
00:07:59.733  25.00000% : 14619.569us
00:07:59.733  50.00000% : 15627.815us
00:07:59.733  75.00000% : 16636.062us
00:07:59.733  90.00000% : 18148.431us
00:07:59.733  95.00000% : 18955.028us
00:07:59.733  98.00000% : 21576.468us
00:07:59.733  99.00000% : 27625.945us
00:07:59.733  99.50000% : 38918.302us
00:07:59.733  99.90000% : 39724.898us
00:07:59.733  99.99000% : 39724.898us
00:07:59.733  99.99900% : 39724.898us
00:07:59.733  99.99990% : 39724.898us
00:07:59.733  99.99999% : 39724.898us
00:07:59.733
00:07:59.733 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:59.733 =================================================================================
00:07:59.733   1.00000% : 12149.366us
00:07:59.733  10.00000% : 13913.797us
00:07:59.733  25.00000% : 14518.745us
00:07:59.733  50.00000% : 15627.815us
00:07:59.733  75.00000% : 16636.062us
00:07:59.733  90.00000% : 18047.606us
00:07:59.733  95.00000% : 18854.203us
00:07:59.733  98.00000% : 21273.994us
00:07:59.733  99.00000% : 28230.892us
00:07:59.733  99.50000% : 38918.302us
00:07:59.733  99.90000% : 39523.249us
00:07:59.733  99.99000% : 39724.898us
00:07:59.733  99.99900% : 39724.898us
00:07:59.733  99.99990% : 39724.898us
00:07:59.733  99.99999% : 39724.898us
00:07:59.733
00:07:59.733 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:59.733 =================================================================================
00:07:59.733   1.00000% : 12351.015us
00:07:59.733  10.00000% : 13913.797us
00:07:59.733  25.00000% : 14619.569us
00:07:59.733  50.00000% : 15526.991us
00:07:59.733  75.00000% : 16636.062us
00:07:59.733  90.00000% : 18249.255us
00:07:59.733  95.00000% : 18753.378us
00:07:59.733  98.00000% : 20265.748us
00:07:59.733  99.00000% : 21576.468us
00:07:59.733  99.50000% : 28432.542us
00:07:59.733  99.90000% : 29037.489us
00:07:59.733  99.99000% : 29239.138us
00:07:59.733  99.99900% : 29239.138us
00:07:59.733  99.99990% : 29239.138us
00:07:59.733  99.99999% : 29239.138us
00:07:59.733
00:07:59.733 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:59.733 ==============================================================================
00:07:59.733 Range in us Cumulative IO count
00:07:59.733 11494.006 - 11544.418: 0.0248% ( 2)
00:07:59.733 11544.418 - 11594.831: 0.0620% ( 3)
00:07:59.733 11594.831 - 11645.243: 0.1364% ( 6)
00:07:59.733 11645.243 - 11695.655: 0.2852% ( 12)
00:07:59.733 11695.655 - 11746.068: 0.4340% ( 12)
00:07:59.733 11746.068 - 11796.480: 0.4960% ( 5)
00:07:59.733 11796.480 - 11846.892: 0.5456% ( 4)
00:07:59.733 11846.892 - 11897.305: 0.5952% ( 4)
00:07:59.733 11897.305 - 11947.717: 0.6448% ( 4)
00:07:59.733 11947.717 - 11998.129: 0.6820% ( 3)
00:07:59.733 11998.129 - 12048.542: 0.7192% ( 3)
00:07:59.733 12048.542 - 12098.954: 0.7564% ( 3)
00:07:59.733 12098.954 - 12149.366: 0.8061% ( 4)
00:07:59.733 12149.366 - 12199.778: 0.8433% ( 3)
00:07:59.733 12199.778 - 12250.191: 0.8681% ( 2)
00:07:59.733 12250.191 - 12300.603: 0.9177% ( 4)
00:07:59.733 12300.603 - 12351.015: 0.9301% ( 1)
00:07:59.733 12351.015 - 12401.428: 0.9673% ( 3)
00:07:59.733 12401.428 - 12451.840: 1.0045% ( 3)
00:07:59.733 12451.840 - 12502.252: 1.0789% ( 6)
00:07:59.733 12502.252 - 12552.665: 1.1409% ( 5)
00:07:59.733 12552.665 - 12603.077: 1.2897% ( 12)
00:07:59.733 12603.077 - 12653.489: 1.3269% ( 3)
00:07:59.733 12653.489 - 12703.902: 1.3889% ( 5)
00:07:59.733 12703.902 - 12754.314: 1.4509% ( 5)
00:07:59.733 12754.314 - 12804.726: 1.5253% ( 6)
00:07:59.733 12804.726 - 12855.138: 1.5997% ( 6)
00:07:59.733 12855.138 - 12905.551: 1.6617% ( 5)
00:07:59.733 12905.551 - 13006.375: 1.8105% ( 12)
00:07:59.733 13006.375 - 13107.200: 2.1081% ( 24)
00:07:59.733 13107.200 - 13208.025: 2.4306% ( 26)
00:07:59.733 13208.025 - 13308.849: 3.1498% ( 58)
00:07:59.733
13308.849 - 13409.674: 4.3155% ( 94) 00:07:59.733 13409.674 - 13510.498: 5.9896% ( 135) 00:07:59.733 13510.498 - 13611.323: 7.0188% ( 83) 00:07:59.733 13611.323 - 13712.148: 8.3581% ( 108) 00:07:59.733 13712.148 - 13812.972: 9.6974% ( 108) 00:07:59.733 13812.972 - 13913.797: 10.8383% ( 92) 00:07:59.733 13913.797 - 14014.622: 12.4876% ( 133) 00:07:59.733 14014.622 - 14115.446: 14.0749% ( 128) 00:07:59.733 14115.446 - 14216.271: 16.0962% ( 163) 00:07:59.733 14216.271 - 14317.095: 18.7624% ( 215) 00:07:59.733 14317.095 - 14417.920: 21.2674% ( 202) 00:07:59.733 14417.920 - 14518.745: 23.6979% ( 196) 00:07:59.733 14518.745 - 14619.569: 26.6493% ( 238) 00:07:59.733 14619.569 - 14720.394: 29.0923% ( 197) 00:07:59.733 14720.394 - 14821.218: 31.7088% ( 211) 00:07:59.733 14821.218 - 14922.043: 34.2386% ( 204) 00:07:59.733 14922.043 - 15022.868: 36.8056% ( 207) 00:07:59.733 15022.868 - 15123.692: 39.5585% ( 222) 00:07:59.733 15123.692 - 15224.517: 42.3859% ( 228) 00:07:59.733 15224.517 - 15325.342: 45.3125% ( 236) 00:07:59.733 15325.342 - 15426.166: 48.1895% ( 232) 00:07:59.733 15426.166 - 15526.991: 51.0665% ( 232) 00:07:59.733 15526.991 - 15627.815: 53.4722% ( 194) 00:07:59.733 15627.815 - 15728.640: 56.1508% ( 216) 00:07:59.733 15728.640 - 15829.465: 58.1349% ( 160) 00:07:59.733 15829.465 - 15930.289: 60.0942% ( 158) 00:07:59.733 15930.289 - 16031.114: 62.6860% ( 209) 00:07:59.733 16031.114 - 16131.938: 64.8313% ( 173) 00:07:59.733 16131.938 - 16232.763: 67.2123% ( 192) 00:07:59.733 16232.763 - 16333.588: 69.0848% ( 151) 00:07:59.733 16333.588 - 16434.412: 70.9573% ( 151) 00:07:59.733 16434.412 - 16535.237: 72.8299% ( 151) 00:07:59.733 16535.237 - 16636.062: 74.3056% ( 119) 00:07:59.733 16636.062 - 16736.886: 76.0417% ( 140) 00:07:59.733 16736.886 - 16837.711: 77.4554% ( 114) 00:07:59.733 16837.711 - 16938.535: 79.1791% ( 139) 00:07:59.733 16938.535 - 17039.360: 80.5556% ( 111) 00:07:59.733 17039.360 - 17140.185: 81.7336% ( 95) 00:07:59.733 17140.185 - 17241.009: 82.6637% ( 75) 00:07:59.733 17241.009 - 17341.834: 83.3705% ( 57) 00:07:59.733 17341.834 - 17442.658: 84.0898% ( 58) 00:07:59.733 17442.658 - 17543.483: 84.6478% ( 45) 00:07:59.733 17543.483 - 17644.308: 85.1190% ( 38) 00:07:59.733 17644.308 - 17745.132: 85.7019% ( 47) 00:07:59.733 17745.132 - 17845.957: 86.5079% ( 65) 00:07:59.733 17845.957 - 17946.782: 87.4256% ( 74) 00:07:59.733 17946.782 - 18047.606: 88.2440% ( 66) 00:07:59.733 18047.606 - 18148.431: 89.2361% ( 80) 00:07:59.733 18148.431 - 18249.255: 90.5754% ( 108) 00:07:59.733 18249.255 - 18350.080: 91.4807% ( 73) 00:07:59.733 18350.080 - 18450.905: 92.4479% ( 78) 00:07:59.733 18450.905 - 18551.729: 93.1672% ( 58) 00:07:59.733 18551.729 - 18652.554: 93.9732% ( 65) 00:07:59.733 18652.554 - 18753.378: 94.5437% ( 46) 00:07:59.733 18753.378 - 18854.203: 94.9777% ( 35) 00:07:59.733 18854.203 - 18955.028: 95.3249% ( 28) 00:07:59.733 18955.028 - 19055.852: 95.4737% ( 12) 00:07:59.733 19055.852 - 19156.677: 95.6597% ( 15) 00:07:59.733 19156.677 - 19257.502: 95.8953% ( 19) 00:07:59.733 19257.502 - 19358.326: 96.0441% ( 12) 00:07:59.733 19358.326 - 19459.151: 96.1806% ( 11) 00:07:59.733 19459.151 - 19559.975: 96.3170% ( 11) 00:07:59.733 19559.975 - 19660.800: 96.4658% ( 12) 00:07:59.733 19660.800 - 19761.625: 96.5650% ( 8) 00:07:59.733 19761.625 - 19862.449: 96.6394% ( 6) 00:07:59.733 19862.449 - 19963.274: 96.7014% ( 5) 00:07:59.733 19963.274 - 20064.098: 96.7634% ( 5) 00:07:59.733 20064.098 - 20164.923: 96.8254% ( 5) 00:07:59.733 20164.923 - 20265.748: 96.8502% ( 2) 00:07:59.733 20265.748 - 
20366.572: 96.9618% ( 9) 00:07:59.733 20366.572 - 20467.397: 97.0982% ( 11) 00:07:59.733 20467.397 - 20568.222: 97.2718% ( 14) 00:07:59.733 20568.222 - 20669.046: 97.5446% ( 22) 00:07:59.733 20669.046 - 20769.871: 97.8919% ( 28) 00:07:59.733 20769.871 - 20870.695: 98.0903% ( 16) 00:07:59.733 20870.695 - 20971.520: 98.2763% ( 15) 00:07:59.733 20971.520 - 21072.345: 98.3631% ( 7) 00:07:59.733 21072.345 - 21173.169: 98.4127% ( 4) 00:07:59.733 26617.698 - 26819.348: 98.5615% ( 12) 00:07:59.733 26819.348 - 27020.997: 98.7351% ( 14) 00:07:59.733 27020.997 - 27222.646: 98.8467% ( 9) 00:07:59.733 27222.646 - 27424.295: 98.9707% ( 10) 00:07:59.733 27424.295 - 27625.945: 99.0947% ( 10) 00:07:59.733 27625.945 - 27827.594: 99.2063% ( 9) 00:07:59.733 36296.862 - 36498.511: 99.2436% ( 3) 00:07:59.733 36498.511 - 36700.160: 99.4916% ( 20) 00:07:59.733 36700.160 - 36901.809: 99.5784% ( 7) 00:07:59.733 37103.458 - 37305.108: 99.6404% ( 5) 00:07:59.733 37305.108 - 37506.757: 99.7644% ( 10) 00:07:59.733 37506.757 - 37708.406: 99.8388% ( 6) 00:07:59.733 37708.406 - 37910.055: 100.0000% ( 13) 00:07:59.733 00:07:59.733 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:59.733 ============================================================================== 00:07:59.733 Range in us Cumulative IO count 00:07:59.734 10636.997 - 10687.409: 0.0868% ( 7) 00:07:59.734 10687.409 - 10737.822: 0.2728% ( 15) 00:07:59.734 10737.822 - 10788.234: 0.3596% ( 7) 00:07:59.734 10788.234 - 10838.646: 0.4092% ( 4) 00:07:59.734 10838.646 - 10889.058: 0.4340% ( 2) 00:07:59.734 10889.058 - 10939.471: 0.4588% ( 2) 00:07:59.734 10939.471 - 10989.883: 0.4836% ( 2) 00:07:59.734 10989.883 - 11040.295: 0.5084% ( 2) 00:07:59.734 11040.295 - 11090.708: 0.5208% ( 1) 00:07:59.734 11090.708 - 11141.120: 0.5332% ( 1) 00:07:59.734 11141.120 - 11191.532: 0.5704% ( 3) 00:07:59.734 11191.532 - 11241.945: 0.5952% ( 2) 00:07:59.734 11241.945 - 11292.357: 0.6572% ( 5) 00:07:59.734 11342.769 - 11393.182: 0.7068% ( 4) 00:07:59.734 11393.182 - 11443.594: 0.7316% ( 2) 00:07:59.734 11443.594 - 11494.006: 0.7688% ( 3) 00:07:59.734 11494.006 - 11544.418: 0.7937% ( 2) 00:07:59.734 12250.191 - 12300.603: 0.8433% ( 4) 00:07:59.734 12300.603 - 12351.015: 0.8557% ( 1) 00:07:59.734 12351.015 - 12401.428: 0.8681% ( 1) 00:07:59.734 12401.428 - 12451.840: 0.9053% ( 3) 00:07:59.734 12451.840 - 12502.252: 0.9425% ( 3) 00:07:59.734 12502.252 - 12552.665: 1.0045% ( 5) 00:07:59.734 12552.665 - 12603.077: 1.0913% ( 7) 00:07:59.734 12603.077 - 12653.489: 1.2277% ( 11) 00:07:59.734 12653.489 - 12703.902: 1.3269% ( 8) 00:07:59.734 12703.902 - 12754.314: 1.5253% ( 16) 00:07:59.734 12754.314 - 12804.726: 1.6493% ( 10) 00:07:59.734 12804.726 - 12855.138: 1.6741% ( 2) 00:07:59.734 12855.138 - 12905.551: 1.6989% ( 2) 00:07:59.734 12905.551 - 13006.375: 2.0337% ( 27) 00:07:59.734 13006.375 - 13107.200: 2.5422% ( 41) 00:07:59.734 13107.200 - 13208.025: 3.6706% ( 91) 00:07:59.734 13208.025 - 13308.849: 4.8611% ( 96) 00:07:59.734 13308.849 - 13409.674: 5.8780% ( 82) 00:07:59.734 13409.674 - 13510.498: 6.9072% ( 83) 00:07:59.734 13510.498 - 13611.323: 7.7133% ( 65) 00:07:59.734 13611.323 - 13712.148: 8.6682% ( 77) 00:07:59.734 13712.148 - 13812.972: 10.1935% ( 123) 00:07:59.734 13812.972 - 13913.797: 11.9792% ( 144) 00:07:59.734 13913.797 - 14014.622: 13.7773% ( 145) 00:07:59.734 14014.622 - 14115.446: 15.8234% ( 165) 00:07:59.734 14115.446 - 14216.271: 17.8943% ( 167) 00:07:59.734 14216.271 - 14317.095: 19.7917% ( 153) 00:07:59.734 14317.095 - 14417.920: 22.4578% ( 215) 
00:07:59.734 14417.920 - 14518.745: 25.5704% ( 251) 00:07:59.734 14518.745 - 14619.569: 28.0134% ( 197) 00:07:59.734 14619.569 - 14720.394: 30.0967% ( 168) 00:07:59.734 14720.394 - 14821.218: 33.0853% ( 241) 00:07:59.734 14821.218 - 14922.043: 35.4167% ( 188) 00:07:59.734 14922.043 - 15022.868: 38.0952% ( 216) 00:07:59.734 15022.868 - 15123.692: 40.7986% ( 218) 00:07:59.734 15123.692 - 15224.517: 43.3284% ( 204) 00:07:59.734 15224.517 - 15325.342: 45.7589% ( 196) 00:07:59.734 15325.342 - 15426.166: 47.8795% ( 171) 00:07:59.734 15426.166 - 15526.991: 49.9752% ( 169) 00:07:59.734 15526.991 - 15627.815: 52.7034% ( 220) 00:07:59.734 15627.815 - 15728.640: 55.1215% ( 195) 00:07:59.734 15728.640 - 15829.465: 57.1801% ( 166) 00:07:59.734 15829.465 - 15930.289: 59.3750% ( 177) 00:07:59.734 15930.289 - 16031.114: 61.0119% ( 132) 00:07:59.734 16031.114 - 16131.938: 62.9340% ( 155) 00:07:59.734 16131.938 - 16232.763: 64.9306% ( 161) 00:07:59.734 16232.763 - 16333.588: 66.8031% ( 151) 00:07:59.734 16333.588 - 16434.412: 69.6057% ( 226) 00:07:59.734 16434.412 - 16535.237: 70.9821% ( 111) 00:07:59.734 16535.237 - 16636.062: 72.9043% ( 155) 00:07:59.734 16636.062 - 16736.886: 74.5288% ( 131) 00:07:59.734 16736.886 - 16837.711: 76.0417% ( 122) 00:07:59.734 16837.711 - 16938.535: 77.3065% ( 102) 00:07:59.734 16938.535 - 17039.360: 78.7450% ( 116) 00:07:59.734 17039.360 - 17140.185: 80.3323% ( 128) 00:07:59.734 17140.185 - 17241.009: 81.9816% ( 133) 00:07:59.734 17241.009 - 17341.834: 83.2589% ( 103) 00:07:59.734 17341.834 - 17442.658: 84.2758% ( 82) 00:07:59.734 17442.658 - 17543.483: 85.0322% ( 61) 00:07:59.734 17543.483 - 17644.308: 85.7515% ( 58) 00:07:59.734 17644.308 - 17745.132: 86.5327% ( 63) 00:07:59.734 17745.132 - 17845.957: 87.0908% ( 45) 00:07:59.734 17845.957 - 17946.782: 87.8224% ( 59) 00:07:59.734 17946.782 - 18047.606: 88.5789% ( 61) 00:07:59.734 18047.606 - 18148.431: 89.5461% ( 78) 00:07:59.734 18148.431 - 18249.255: 90.9102% ( 110) 00:07:59.734 18249.255 - 18350.080: 91.8527% ( 76) 00:07:59.734 18350.080 - 18450.905: 92.4603% ( 49) 00:07:59.734 18450.905 - 18551.729: 93.0060% ( 44) 00:07:59.734 18551.729 - 18652.554: 93.7004% ( 56) 00:07:59.734 18652.554 - 18753.378: 94.1468% ( 36) 00:07:59.734 18753.378 - 18854.203: 94.7793% ( 51) 00:07:59.734 18854.203 - 18955.028: 95.3745% ( 48) 00:07:59.734 18955.028 - 19055.852: 95.8085% ( 35) 00:07:59.734 19055.852 - 19156.677: 96.1558% ( 28) 00:07:59.734 19156.677 - 19257.502: 96.4410% ( 23) 00:07:59.734 19257.502 - 19358.326: 96.5526% ( 9) 00:07:59.734 19358.326 - 19459.151: 96.6890% ( 11) 00:07:59.734 19459.151 - 19559.975: 96.7510% ( 5) 00:07:59.734 19559.975 - 19660.800: 96.8006% ( 4) 00:07:59.734 19660.800 - 19761.625: 96.8254% ( 2) 00:07:59.734 19963.274 - 20064.098: 96.8626% ( 3) 00:07:59.734 20064.098 - 20164.923: 97.0610% ( 16) 00:07:59.734 20164.923 - 20265.748: 97.1974% ( 11) 00:07:59.734 20265.748 - 20366.572: 97.3090% ( 9) 00:07:59.734 20366.572 - 20467.397: 97.3462% ( 3) 00:07:59.734 20467.397 - 20568.222: 97.6811% ( 27) 00:07:59.734 20568.222 - 20669.046: 97.8919% ( 17) 00:07:59.734 20669.046 - 20769.871: 97.9415% ( 4) 00:07:59.734 20769.871 - 20870.695: 98.0159% ( 6) 00:07:59.734 20870.695 - 20971.520: 98.0655% ( 4) 00:07:59.734 20971.520 - 21072.345: 98.1151% ( 4) 00:07:59.734 21072.345 - 21173.169: 98.2019% ( 7) 00:07:59.734 21173.169 - 21273.994: 98.2887% ( 7) 00:07:59.734 21273.994 - 21374.818: 98.3507% ( 5) 00:07:59.734 21374.818 - 21475.643: 98.4003% ( 4) 00:07:59.734 21475.643 - 21576.468: 98.4127% ( 1) 00:07:59.734 
26012.751 - 26214.400: 98.4995% ( 7) 00:07:59.734 26214.400 - 26416.049: 98.6111% ( 9) 00:07:59.734 26416.049 - 26617.698: 98.6979% ( 7) 00:07:59.734 26617.698 - 26819.348: 98.8095% ( 9) 00:07:59.734 26819.348 - 27020.997: 98.9087% ( 8) 00:07:59.734 27020.997 - 27222.646: 99.0079% ( 8) 00:07:59.734 27222.646 - 27424.295: 99.1319% ( 10) 00:07:59.734 27424.295 - 27625.945: 99.2063% ( 6) 00:07:59.734 36700.160 - 36901.809: 99.2312% ( 2) 00:07:59.734 36901.809 - 37103.458: 99.3304% ( 8) 00:07:59.734 37103.458 - 37305.108: 99.4296% ( 8) 00:07:59.734 37305.108 - 37506.757: 99.4916% ( 5) 00:07:59.734 37506.757 - 37708.406: 99.5908% ( 8) 00:07:59.734 37708.406 - 37910.055: 99.7024% ( 9) 00:07:59.734 37910.055 - 38111.705: 99.8264% ( 10) 00:07:59.734 38111.705 - 38313.354: 99.9504% ( 10) 00:07:59.734 38313.354 - 38515.003: 100.0000% ( 4) 00:07:59.734 00:07:59.734 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:59.734 ============================================================================== 00:07:59.734 Range in us Cumulative IO count 00:07:59.734 9578.338 - 9628.751: 0.0124% ( 1) 00:07:59.734 9628.751 - 9679.163: 0.0744% ( 5) 00:07:59.734 9679.163 - 9729.575: 0.1736% ( 8) 00:07:59.734 9729.575 - 9779.988: 0.2604% ( 7) 00:07:59.734 9779.988 - 9830.400: 0.4092% ( 12) 00:07:59.734 9830.400 - 9880.812: 0.5580% ( 12) 00:07:59.734 9880.812 - 9931.225: 0.6324% ( 6) 00:07:59.734 9931.225 - 9981.637: 0.6944% ( 5) 00:07:59.734 9981.637 - 10032.049: 0.7316% ( 3) 00:07:59.734 10032.049 - 10082.462: 0.7688% ( 3) 00:07:59.734 10082.462 - 10132.874: 0.7937% ( 2) 00:07:59.734 12653.489 - 12703.902: 0.8061% ( 1) 00:07:59.734 12754.314 - 12804.726: 0.8805% ( 6) 00:07:59.734 12804.726 - 12855.138: 0.9797% ( 8) 00:07:59.734 12855.138 - 12905.551: 1.0789% ( 8) 00:07:59.734 12905.551 - 13006.375: 1.3517% ( 22) 00:07:59.734 13006.375 - 13107.200: 1.7609% ( 33) 00:07:59.734 13107.200 - 13208.025: 2.1577% ( 32) 00:07:59.734 13208.025 - 13308.849: 2.9390% ( 63) 00:07:59.734 13308.849 - 13409.674: 3.6582% ( 58) 00:07:59.734 13409.674 - 13510.498: 4.6007% ( 76) 00:07:59.734 13510.498 - 13611.323: 5.9276% ( 107) 00:07:59.734 13611.323 - 13712.148: 7.8869% ( 158) 00:07:59.734 13712.148 - 13812.972: 10.2059% ( 187) 00:07:59.734 13812.972 - 13913.797: 12.3512% ( 173) 00:07:59.734 13913.797 - 14014.622: 14.1617% ( 146) 00:07:59.734 14014.622 - 14115.446: 16.0466% ( 152) 00:07:59.734 14115.446 - 14216.271: 17.9439% ( 153) 00:07:59.734 14216.271 - 14317.095: 19.7917% ( 149) 00:07:59.734 14317.095 - 14417.920: 21.8254% ( 164) 00:07:59.734 14417.920 - 14518.745: 23.9707% ( 173) 00:07:59.734 14518.745 - 14619.569: 26.4509% ( 200) 00:07:59.734 14619.569 - 14720.394: 28.7202% ( 183) 00:07:59.734 14720.394 - 14821.218: 31.1260% ( 194) 00:07:59.734 14821.218 - 14922.043: 33.6186% ( 201) 00:07:59.734 14922.043 - 15022.868: 35.7143% ( 169) 00:07:59.734 15022.868 - 15123.692: 38.2068% ( 201) 00:07:59.734 15123.692 - 15224.517: 40.7862% ( 208) 00:07:59.734 15224.517 - 15325.342: 43.5640% ( 224) 00:07:59.734 15325.342 - 15426.166: 47.1602% ( 290) 00:07:59.734 15426.166 - 15526.991: 49.9380% ( 224) 00:07:59.734 15526.991 - 15627.815: 52.5670% ( 212) 00:07:59.734 15627.815 - 15728.640: 55.5184% ( 238) 00:07:59.734 15728.640 - 15829.465: 58.7054% ( 257) 00:07:59.734 15829.465 - 15930.289: 61.5823% ( 232) 00:07:59.734 15930.289 - 16031.114: 63.9013% ( 187) 00:07:59.734 16031.114 - 16131.938: 65.9970% ( 169) 00:07:59.734 16131.938 - 16232.763: 67.3859% ( 112) 00:07:59.734 16232.763 - 16333.588: 68.8616% ( 119) 00:07:59.734 
16333.588 - 16434.412: 70.8333% ( 159) 00:07:59.735 16434.412 - 16535.237: 72.6562% ( 147) 00:07:59.735 16535.237 - 16636.062: 74.6156% ( 158) 00:07:59.735 16636.062 - 16736.886: 76.4385% ( 147) 00:07:59.735 16736.886 - 16837.711: 78.2118% ( 143) 00:07:59.735 16837.711 - 16938.535: 79.5139% ( 105) 00:07:59.735 16938.535 - 17039.360: 80.5060% ( 80) 00:07:59.735 17039.360 - 17140.185: 81.4360% ( 75) 00:07:59.735 17140.185 - 17241.009: 82.4777% ( 84) 00:07:59.735 17241.009 - 17341.834: 83.2961% ( 66) 00:07:59.735 17341.834 - 17442.658: 83.8294% ( 43) 00:07:59.735 17442.658 - 17543.483: 84.4618% ( 51) 00:07:59.735 17543.483 - 17644.308: 85.2679% ( 65) 00:07:59.735 17644.308 - 17745.132: 86.1111% ( 68) 00:07:59.735 17745.132 - 17845.957: 87.0164% ( 73) 00:07:59.735 17845.957 - 17946.782: 88.1324% ( 90) 00:07:59.735 17946.782 - 18047.606: 88.9509% ( 66) 00:07:59.735 18047.606 - 18148.431: 90.1166% ( 94) 00:07:59.735 18148.431 - 18249.255: 91.0094% ( 72) 00:07:59.735 18249.255 - 18350.080: 91.8403% ( 67) 00:07:59.735 18350.080 - 18450.905: 92.8323% ( 80) 00:07:59.735 18450.905 - 18551.729: 93.9980% ( 94) 00:07:59.735 18551.729 - 18652.554: 94.7793% ( 63) 00:07:59.735 18652.554 - 18753.378: 95.3621% ( 47) 00:07:59.735 18753.378 - 18854.203: 95.7093% ( 28) 00:07:59.735 18854.203 - 18955.028: 95.9325% ( 18) 00:07:59.735 18955.028 - 19055.852: 96.1806% ( 20) 00:07:59.735 19055.852 - 19156.677: 96.5154% ( 27) 00:07:59.735 19156.677 - 19257.502: 96.6518% ( 11) 00:07:59.735 19257.502 - 19358.326: 96.7634% ( 9) 00:07:59.735 19358.326 - 19459.151: 96.8130% ( 4) 00:07:59.735 19459.151 - 19559.975: 96.8874% ( 6) 00:07:59.735 19559.975 - 19660.800: 96.9618% ( 6) 00:07:59.735 19660.800 - 19761.625: 97.0362% ( 6) 00:07:59.735 19761.625 - 19862.449: 97.1850% ( 12) 00:07:59.735 19862.449 - 19963.274: 97.3710% ( 15) 00:07:59.735 19963.274 - 20064.098: 97.4578% ( 7) 00:07:59.735 20064.098 - 20164.923: 97.5322% ( 6) 00:07:59.735 20164.923 - 20265.748: 97.5818% ( 4) 00:07:59.735 20265.748 - 20366.572: 97.6190% ( 3) 00:07:59.735 20467.397 - 20568.222: 97.6438% ( 2) 00:07:59.735 20568.222 - 20669.046: 97.7183% ( 6) 00:07:59.735 20669.046 - 20769.871: 97.7679% ( 4) 00:07:59.735 20769.871 - 20870.695: 97.8299% ( 5) 00:07:59.735 20870.695 - 20971.520: 97.9043% ( 6) 00:07:59.735 20971.520 - 21072.345: 97.9663% ( 5) 00:07:59.735 21072.345 - 21173.169: 98.1151% ( 12) 00:07:59.735 21173.169 - 21273.994: 98.1647% ( 4) 00:07:59.735 21273.994 - 21374.818: 98.2267% ( 5) 00:07:59.735 21374.818 - 21475.643: 98.3259% ( 8) 00:07:59.735 21475.643 - 21576.468: 98.3879% ( 5) 00:07:59.735 21576.468 - 21677.292: 98.4127% ( 2) 00:07:59.735 25811.102 - 26012.751: 98.5119% ( 8) 00:07:59.735 26012.751 - 26214.400: 98.6359% ( 10) 00:07:59.735 26214.400 - 26416.049: 98.7599% ( 10) 00:07:59.735 26416.049 - 26617.698: 98.8839% ( 10) 00:07:59.735 26617.698 - 26819.348: 98.9955% ( 9) 00:07:59.735 26819.348 - 27020.997: 99.1071% ( 9) 00:07:59.735 27020.997 - 27222.646: 99.2063% ( 8) 00:07:59.735 36700.160 - 36901.809: 99.2932% ( 7) 00:07:59.735 36901.809 - 37103.458: 99.4296% ( 11) 00:07:59.735 37103.458 - 37305.108: 99.5660% ( 11) 00:07:59.735 37305.108 - 37506.757: 99.6776% ( 9) 00:07:59.735 37506.757 - 37708.406: 99.8016% ( 10) 00:07:59.735 37708.406 - 37910.055: 99.9132% ( 9) 00:07:59.735 37910.055 - 38111.705: 100.0000% ( 7) 00:07:59.735 00:07:59.735 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:59.735 ============================================================================== 00:07:59.735 Range in us Cumulative IO 
count 00:07:59.735 7612.258 - 7662.671: 0.0372% ( 3) 00:07:59.735 7662.671 - 7713.083: 0.0744% ( 3) 00:07:59.735 7713.083 - 7763.495: 0.1116% ( 3) 00:07:59.735 7763.495 - 7813.908: 0.1736% ( 5) 00:07:59.735 7813.908 - 7864.320: 0.3968% ( 18) 00:07:59.735 7864.320 - 7914.732: 0.5084% ( 9) 00:07:59.735 7914.732 - 7965.145: 0.5456% ( 3) 00:07:59.735 7965.145 - 8015.557: 0.5952% ( 4) 00:07:59.735 8015.557 - 8065.969: 0.6324% ( 3) 00:07:59.735 8065.969 - 8116.382: 0.6820% ( 4) 00:07:59.735 8116.382 - 8166.794: 0.7192% ( 3) 00:07:59.735 8166.794 - 8217.206: 0.7688% ( 4) 00:07:59.735 8217.206 - 8267.618: 0.7937% ( 2) 00:07:59.735 12502.252 - 12552.665: 0.8309% ( 3) 00:07:59.735 12552.665 - 12603.077: 0.8929% ( 5) 00:07:59.735 12603.077 - 12653.489: 1.0417% ( 12) 00:07:59.735 12653.489 - 12703.902: 1.2277% ( 15) 00:07:59.735 12703.902 - 12754.314: 1.4509% ( 18) 00:07:59.735 12754.314 - 12804.726: 1.7733% ( 26) 00:07:59.735 12804.726 - 12855.138: 1.8601% ( 7) 00:07:59.735 12855.138 - 12905.551: 1.9965% ( 11) 00:07:59.735 12905.551 - 13006.375: 2.3686% ( 30) 00:07:59.735 13006.375 - 13107.200: 2.9886% ( 50) 00:07:59.735 13107.200 - 13208.025: 3.4474% ( 37) 00:07:59.735 13208.025 - 13308.849: 4.0551% ( 49) 00:07:59.735 13308.849 - 13409.674: 4.6627% ( 49) 00:07:59.735 13409.674 - 13510.498: 5.6920% ( 83) 00:07:59.735 13510.498 - 13611.323: 6.7832% ( 88) 00:07:59.735 13611.323 - 13712.148: 7.8993% ( 90) 00:07:59.735 13712.148 - 13812.972: 9.7346% ( 148) 00:07:59.735 13812.972 - 13913.797: 11.3219% ( 128) 00:07:59.735 13913.797 - 14014.622: 13.0084% ( 136) 00:07:59.735 14014.622 - 14115.446: 14.8438% ( 148) 00:07:59.735 14115.446 - 14216.271: 16.9519% ( 170) 00:07:59.735 14216.271 - 14317.095: 19.4196% ( 199) 00:07:59.735 14317.095 - 14417.920: 22.0982% ( 216) 00:07:59.735 14417.920 - 14518.745: 24.4544% ( 190) 00:07:59.735 14518.745 - 14619.569: 26.6741% ( 179) 00:07:59.735 14619.569 - 14720.394: 28.9435% ( 183) 00:07:59.735 14720.394 - 14821.218: 31.3988% ( 198) 00:07:59.735 14821.218 - 14922.043: 34.9206% ( 284) 00:07:59.735 14922.043 - 15022.868: 37.3140% ( 193) 00:07:59.735 15022.868 - 15123.692: 39.6825% ( 191) 00:07:59.735 15123.692 - 15224.517: 42.5719% ( 233) 00:07:59.735 15224.517 - 15325.342: 45.0769% ( 202) 00:07:59.735 15325.342 - 15426.166: 47.5198% ( 197) 00:07:59.735 15426.166 - 15526.991: 49.6652% ( 173) 00:07:59.735 15526.991 - 15627.815: 52.3189% ( 214) 00:07:59.735 15627.815 - 15728.640: 55.4688% ( 254) 00:07:59.735 15728.640 - 15829.465: 58.4945% ( 244) 00:07:59.735 15829.465 - 15930.289: 61.2971% ( 226) 00:07:59.735 15930.289 - 16031.114: 64.4593% ( 255) 00:07:59.735 16031.114 - 16131.938: 66.9519% ( 201) 00:07:59.735 16131.938 - 16232.763: 69.5685% ( 211) 00:07:59.735 16232.763 - 16333.588: 71.6642% ( 169) 00:07:59.735 16333.588 - 16434.412: 73.5615% ( 153) 00:07:59.735 16434.412 - 16535.237: 74.8884% ( 107) 00:07:59.735 16535.237 - 16636.062: 76.0541% ( 94) 00:07:59.735 16636.062 - 16736.886: 77.1081% ( 85) 00:07:59.735 16736.886 - 16837.711: 78.2118% ( 89) 00:07:59.735 16837.711 - 16938.535: 79.2039% ( 80) 00:07:59.735 16938.535 - 17039.360: 80.3199% ( 90) 00:07:59.735 17039.360 - 17140.185: 81.2748% ( 77) 00:07:59.735 17140.185 - 17241.009: 81.9940% ( 58) 00:07:59.735 17241.009 - 17341.834: 83.1845% ( 96) 00:07:59.735 17341.834 - 17442.658: 84.4122% ( 99) 00:07:59.735 17442.658 - 17543.483: 85.5283% ( 90) 00:07:59.735 17543.483 - 17644.308: 86.4087% ( 71) 00:07:59.735 17644.308 - 17745.132: 87.2024% ( 64) 00:07:59.735 17745.132 - 17845.957: 88.0828% ( 71) 00:07:59.735 
17845.957 - 17946.782: 88.8765% ( 64) 00:07:59.735 17946.782 - 18047.606: 89.5337% ( 53) 00:07:59.735 18047.606 - 18148.431: 90.1414% ( 49) 00:07:59.735 18148.431 - 18249.255: 90.8606% ( 58) 00:07:59.735 18249.255 - 18350.080: 91.7287% ( 70) 00:07:59.735 18350.080 - 18450.905: 92.3735% ( 52) 00:07:59.735 18450.905 - 18551.729: 92.9812% ( 49) 00:07:59.735 18551.729 - 18652.554: 93.6260% ( 52) 00:07:59.735 18652.554 - 18753.378: 94.4072% ( 63) 00:07:59.735 18753.378 - 18854.203: 94.9157% ( 41) 00:07:59.735 18854.203 - 18955.028: 95.2753% ( 29) 00:07:59.735 18955.028 - 19055.852: 95.5729% ( 24) 00:07:59.735 19055.852 - 19156.677: 95.8953% ( 26) 00:07:59.735 19156.677 - 19257.502: 96.2674% ( 30) 00:07:59.735 19257.502 - 19358.326: 96.4534% ( 15) 00:07:59.735 19358.326 - 19459.151: 96.5898% ( 11) 00:07:59.735 19459.151 - 19559.975: 96.7138% ( 10) 00:07:59.735 19559.975 - 19660.800: 96.8750% ( 13) 00:07:59.735 19660.800 - 19761.625: 97.0114% ( 11) 00:07:59.735 19761.625 - 19862.449: 97.1602% ( 12) 00:07:59.735 19862.449 - 19963.274: 97.3338% ( 14) 00:07:59.735 19963.274 - 20064.098: 97.3834% ( 4) 00:07:59.735 20064.098 - 20164.923: 97.4330% ( 4) 00:07:59.735 20164.923 - 20265.748: 97.4826% ( 4) 00:07:59.735 20265.748 - 20366.572: 97.5322% ( 4) 00:07:59.735 20366.572 - 20467.397: 97.5942% ( 5) 00:07:59.735 20467.397 - 20568.222: 97.6190% ( 2) 00:07:59.735 20971.520 - 21072.345: 97.6562% ( 3) 00:07:59.735 21072.345 - 21173.169: 97.7307% ( 6) 00:07:59.735 21173.169 - 21273.994: 97.8175% ( 7) 00:07:59.735 21273.994 - 21374.818: 97.8671% ( 4) 00:07:59.735 21374.818 - 21475.643: 97.9415% ( 6) 00:07:59.735 21475.643 - 21576.468: 98.0655% ( 10) 00:07:59.735 21576.468 - 21677.292: 98.1771% ( 9) 00:07:59.735 21677.292 - 21778.117: 98.2763% ( 8) 00:07:59.735 21778.117 - 21878.942: 98.3259% ( 4) 00:07:59.735 21878.942 - 21979.766: 98.3755% ( 4) 00:07:59.736 21979.766 - 22080.591: 98.4127% ( 3) 00:07:59.736 26617.698 - 26819.348: 98.4995% ( 7) 00:07:59.736 26819.348 - 27020.997: 98.6359% ( 11) 00:07:59.736 27020.997 - 27222.646: 98.7723% ( 11) 00:07:59.736 27222.646 - 27424.295: 98.8963% ( 10) 00:07:59.736 27424.295 - 27625.945: 99.0327% ( 11) 00:07:59.736 27625.945 - 27827.594: 99.1567% ( 10) 00:07:59.736 27827.594 - 28029.243: 99.2063% ( 4) 00:07:59.736 38111.705 - 38313.354: 99.2932% ( 7) 00:07:59.736 38313.354 - 38515.003: 99.3924% ( 8) 00:07:59.736 38515.003 - 38716.652: 99.4792% ( 7) 00:07:59.736 38716.652 - 38918.302: 99.5660% ( 7) 00:07:59.736 38918.302 - 39119.951: 99.6528% ( 7) 00:07:59.736 39119.951 - 39321.600: 99.7520% ( 8) 00:07:59.736 39321.600 - 39523.249: 99.8884% ( 11) 00:07:59.736 39523.249 - 39724.898: 100.0000% ( 9) 00:07:59.736 00:07:59.736 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:59.736 ============================================================================== 00:07:59.736 Range in us Cumulative IO count 00:07:59.736 7007.311 - 7057.723: 0.0124% ( 1) 00:07:59.736 7057.723 - 7108.135: 0.0744% ( 5) 00:07:59.736 7108.135 - 7158.548: 0.1736% ( 8) 00:07:59.736 7158.548 - 7208.960: 0.2604% ( 7) 00:07:59.736 7208.960 - 7259.372: 0.4712% ( 17) 00:07:59.736 7259.372 - 7309.785: 0.5704% ( 8) 00:07:59.736 7309.785 - 7360.197: 0.6200% ( 4) 00:07:59.736 7360.197 - 7410.609: 0.6696% ( 4) 00:07:59.736 7410.609 - 7461.022: 0.7068% ( 3) 00:07:59.736 7461.022 - 7511.434: 0.7440% ( 3) 00:07:59.736 7511.434 - 7561.846: 0.7688% ( 2) 00:07:59.736 7561.846 - 7612.258: 0.7937% ( 2) 00:07:59.736 11897.305 - 11947.717: 0.8185% ( 2) 00:07:59.736 11947.717 - 11998.129: 0.8309% ( 1) 
00:07:59.736 11998.129 - 12048.542: 0.8681% ( 3) 00:07:59.736 12048.542 - 12098.954: 0.9673% ( 8) 00:07:59.736 12098.954 - 12149.366: 1.0045% ( 3) 00:07:59.736 12149.366 - 12199.778: 1.0665% ( 5) 00:07:59.736 12199.778 - 12250.191: 1.2277% ( 13) 00:07:59.736 12250.191 - 12300.603: 1.3145% ( 7) 00:07:59.736 12300.603 - 12351.015: 1.3641% ( 4) 00:07:59.736 12351.015 - 12401.428: 1.4013% ( 3) 00:07:59.736 12401.428 - 12451.840: 1.4137% ( 1) 00:07:59.736 12451.840 - 12502.252: 1.4385% ( 2) 00:07:59.736 12502.252 - 12552.665: 1.4633% ( 2) 00:07:59.736 12552.665 - 12603.077: 1.4881% ( 2) 00:07:59.736 12603.077 - 12653.489: 1.5129% ( 2) 00:07:59.736 12653.489 - 12703.902: 1.5501% ( 3) 00:07:59.736 12703.902 - 12754.314: 1.5997% ( 4) 00:07:59.736 12754.314 - 12804.726: 1.6245% ( 2) 00:07:59.736 12855.138 - 12905.551: 1.6617% ( 3) 00:07:59.736 12905.551 - 13006.375: 1.8601% ( 16) 00:07:59.736 13006.375 - 13107.200: 2.2941% ( 35) 00:07:59.736 13107.200 - 13208.025: 3.1126% ( 66) 00:07:59.736 13208.025 - 13308.849: 3.9187% ( 65) 00:07:59.736 13308.849 - 13409.674: 4.7123% ( 64) 00:07:59.736 13409.674 - 13510.498: 5.8656% ( 93) 00:07:59.736 13510.498 - 13611.323: 6.8824% ( 82) 00:07:59.736 13611.323 - 13712.148: 8.0853% ( 97) 00:07:59.736 13712.148 - 13812.972: 9.5982% ( 122) 00:07:59.736 13812.972 - 13913.797: 11.3591% ( 142) 00:07:59.736 13913.797 - 14014.622: 13.4425% ( 168) 00:07:59.736 14014.622 - 14115.446: 15.3646% ( 155) 00:07:59.736 14115.446 - 14216.271: 17.5843% ( 179) 00:07:59.736 14216.271 - 14317.095: 19.9653% ( 192) 00:07:59.736 14317.095 - 14417.920: 22.5322% ( 207) 00:07:59.736 14417.920 - 14518.745: 25.3844% ( 230) 00:07:59.736 14518.745 - 14619.569: 28.1622% ( 224) 00:07:59.736 14619.569 - 14720.394: 30.6176% ( 198) 00:07:59.736 14720.394 - 14821.218: 33.2589% ( 213) 00:07:59.736 14821.218 - 14922.043: 35.8507% ( 209) 00:07:59.736 14922.043 - 15022.868: 38.2316% ( 192) 00:07:59.736 15022.868 - 15123.692: 40.6870% ( 198) 00:07:59.736 15123.692 - 15224.517: 42.9936% ( 186) 00:07:59.736 15224.517 - 15325.342: 45.0645% ( 167) 00:07:59.736 15325.342 - 15426.166: 47.2098% ( 173) 00:07:59.736 15426.166 - 15526.991: 49.4668% ( 182) 00:07:59.736 15526.991 - 15627.815: 51.6245% ( 174) 00:07:59.736 15627.815 - 15728.640: 53.5590% ( 156) 00:07:59.736 15728.640 - 15829.465: 55.7540% ( 177) 00:07:59.736 15829.465 - 15930.289: 58.6310% ( 232) 00:07:59.736 15930.289 - 16031.114: 61.3591% ( 220) 00:07:59.736 16031.114 - 16131.938: 64.6453% ( 265) 00:07:59.736 16131.938 - 16232.763: 67.8075% ( 255) 00:07:59.736 16232.763 - 16333.588: 70.4365% ( 212) 00:07:59.736 16333.588 - 16434.412: 72.8423% ( 194) 00:07:59.736 16434.412 - 16535.237: 74.5660% ( 139) 00:07:59.736 16535.237 - 16636.062: 76.5377% ( 159) 00:07:59.736 16636.062 - 16736.886: 78.1126% ( 127) 00:07:59.736 16736.886 - 16837.711: 79.5883% ( 119) 00:07:59.736 16837.711 - 16938.535: 80.4563% ( 70) 00:07:59.736 16938.535 - 17039.360: 81.2376% ( 63) 00:07:59.736 17039.360 - 17140.185: 82.1429% ( 73) 00:07:59.736 17140.185 - 17241.009: 83.3705% ( 99) 00:07:59.736 17241.009 - 17341.834: 84.3130% ( 76) 00:07:59.736 17341.834 - 17442.658: 85.1314% ( 66) 00:07:59.736 17442.658 - 17543.483: 85.9251% ( 64) 00:07:59.736 17543.483 - 17644.308: 86.6443% ( 58) 00:07:59.736 17644.308 - 17745.132: 87.7976% ( 93) 00:07:59.736 17745.132 - 17845.957: 88.8021% ( 81) 00:07:59.736 17845.957 - 17946.782: 89.7941% ( 80) 00:07:59.736 17946.782 - 18047.606: 90.6622% ( 70) 00:07:59.736 18047.606 - 18148.431: 91.8279% ( 94) 00:07:59.736 18148.431 - 18249.255: 92.5595% ( 
59) 00:07:59.736 18249.255 - 18350.080: 93.1300% ( 46) 00:07:59.736 18350.080 - 18450.905: 93.5144% ( 31) 00:07:59.736 18450.905 - 18551.729: 93.8864% ( 30) 00:07:59.736 18551.729 - 18652.554: 94.3700% ( 39) 00:07:59.736 18652.554 - 18753.378: 94.7793% ( 33) 00:07:59.736 18753.378 - 18854.203: 95.1885% ( 33) 00:07:59.736 18854.203 - 18955.028: 95.6845% ( 40) 00:07:59.736 18955.028 - 19055.852: 96.1806% ( 40) 00:07:59.736 19055.852 - 19156.677: 96.3418% ( 13) 00:07:59.736 19156.677 - 19257.502: 96.4534% ( 9) 00:07:59.736 19257.502 - 19358.326: 96.5278% ( 6) 00:07:59.736 19358.326 - 19459.151: 96.6394% ( 9) 00:07:59.736 19459.151 - 19559.975: 96.7386% ( 8) 00:07:59.736 19559.975 - 19660.800: 96.8006% ( 5) 00:07:59.736 19660.800 - 19761.625: 96.8254% ( 2) 00:07:59.736 19963.274 - 20064.098: 96.8626% ( 3) 00:07:59.736 20064.098 - 20164.923: 96.9370% ( 6) 00:07:59.736 20164.923 - 20265.748: 97.0362% ( 8) 00:07:59.736 20265.748 - 20366.572: 97.1230% ( 7) 00:07:59.736 20366.572 - 20467.397: 97.1974% ( 6) 00:07:59.736 20467.397 - 20568.222: 97.2594% ( 5) 00:07:59.736 20568.222 - 20669.046: 97.3090% ( 4) 00:07:59.736 20669.046 - 20769.871: 97.4206% ( 9) 00:07:59.736 20769.871 - 20870.695: 97.5446% ( 10) 00:07:59.736 20870.695 - 20971.520: 97.6935% ( 12) 00:07:59.736 20971.520 - 21072.345: 97.8547% ( 13) 00:07:59.736 21072.345 - 21173.169: 97.9911% ( 11) 00:07:59.736 21173.169 - 21273.994: 98.1647% ( 14) 00:07:59.736 21273.994 - 21374.818: 98.2267% ( 5) 00:07:59.736 21374.818 - 21475.643: 98.2887% ( 5) 00:07:59.736 21475.643 - 21576.468: 98.3507% ( 5) 00:07:59.736 21576.468 - 21677.292: 98.4003% ( 4) 00:07:59.736 21677.292 - 21778.117: 98.4127% ( 1) 00:07:59.736 27020.997 - 27222.646: 98.4375% ( 2) 00:07:59.736 27222.646 - 27424.295: 98.5615% ( 10) 00:07:59.736 27424.295 - 27625.945: 98.6855% ( 10) 00:07:59.736 27625.945 - 27827.594: 98.8095% ( 10) 00:07:59.736 27827.594 - 28029.243: 98.9335% ( 10) 00:07:59.736 28029.243 - 28230.892: 99.0575% ( 10) 00:07:59.736 28230.892 - 28432.542: 99.1939% ( 11) 00:07:59.736 28432.542 - 28634.191: 99.2063% ( 1) 00:07:59.736 38111.705 - 38313.354: 99.2188% ( 1) 00:07:59.736 38313.354 - 38515.003: 99.3428% ( 10) 00:07:59.736 38515.003 - 38716.652: 99.4668% ( 10) 00:07:59.736 38716.652 - 38918.302: 99.5784% ( 9) 00:07:59.736 38918.302 - 39119.951: 99.6900% ( 9) 00:07:59.736 39119.951 - 39321.600: 99.8016% ( 9) 00:07:59.736 39321.600 - 39523.249: 99.9256% ( 10) 00:07:59.736 39523.249 - 39724.898: 100.0000% ( 6) 00:07:59.736 00:07:59.736 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:59.736 ============================================================================== 00:07:59.736 Range in us Cumulative IO count 00:07:59.736 6049.477 - 6074.683: 0.0246% ( 2) 00:07:59.736 6074.683 - 6099.889: 0.0369% ( 1) 00:07:59.736 6099.889 - 6125.095: 0.0738% ( 3) 00:07:59.736 6125.095 - 6150.302: 0.0984% ( 2) 00:07:59.736 6150.302 - 6175.508: 0.1230% ( 2) 00:07:59.736 6175.508 - 6200.714: 0.1476% ( 2) 00:07:59.736 6200.714 - 6225.920: 0.1845% ( 3) 00:07:59.736 6225.920 - 6251.126: 0.4060% ( 18) 00:07:59.736 6251.126 - 6276.332: 0.5044% ( 8) 00:07:59.736 6276.332 - 6301.538: 0.5536% ( 4) 00:07:59.736 6301.538 - 6326.745: 0.5782% ( 2) 00:07:59.736 6326.745 - 6351.951: 0.6029% ( 2) 00:07:59.737 6351.951 - 6377.157: 0.6152% ( 1) 00:07:59.737 6377.157 - 6402.363: 0.6398% ( 2) 00:07:59.737 6402.363 - 6427.569: 0.6644% ( 2) 00:07:59.737 6427.569 - 6452.775: 0.6890% ( 2) 00:07:59.737 6452.775 - 6503.188: 0.7259% ( 3) 00:07:59.737 6503.188 - 6553.600: 0.7628% ( 3) 
00:07:59.737 6553.600 - 6604.012: 0.7874% ( 2) 00:07:59.737 12098.954 - 12149.366: 0.7997% ( 1) 00:07:59.737 12149.366 - 12199.778: 0.8120% ( 1) 00:07:59.737 12199.778 - 12250.191: 0.8735% ( 5) 00:07:59.737 12250.191 - 12300.603: 0.9104% ( 3) 00:07:59.737 12300.603 - 12351.015: 1.0212% ( 9) 00:07:59.737 12351.015 - 12401.428: 1.1565% ( 11) 00:07:59.737 12401.428 - 12451.840: 1.3410% ( 15) 00:07:59.737 12451.840 - 12502.252: 1.6855% ( 28) 00:07:59.737 12502.252 - 12552.665: 1.8332% ( 12) 00:07:59.737 12552.665 - 12603.077: 1.9685% ( 11) 00:07:59.737 12603.077 - 12653.489: 2.0423% ( 6) 00:07:59.737 12653.489 - 12703.902: 2.1284% ( 7) 00:07:59.737 12703.902 - 12754.314: 2.2146% ( 7) 00:07:59.737 12754.314 - 12804.726: 2.3253% ( 9) 00:07:59.737 12804.726 - 12855.138: 2.3868% ( 5) 00:07:59.737 12855.138 - 12905.551: 2.4975% ( 9) 00:07:59.737 12905.551 - 13006.375: 2.7559% ( 21) 00:07:59.737 13006.375 - 13107.200: 3.0020% ( 20) 00:07:59.737 13107.200 - 13208.025: 3.6663% ( 54) 00:07:59.737 13208.025 - 13308.849: 4.2446% ( 47) 00:07:59.737 13308.849 - 13409.674: 4.7613% ( 42) 00:07:59.737 13409.674 - 13510.498: 5.7087% ( 77) 00:07:59.737 13510.498 - 13611.323: 6.6560% ( 77) 00:07:59.737 13611.323 - 13712.148: 7.6280% ( 79) 00:07:59.737 13712.148 - 13812.972: 8.8091% ( 96) 00:07:59.737 13812.972 - 13913.797: 10.2608% ( 118) 00:07:59.737 13913.797 - 14014.622: 12.1432% ( 153) 00:07:59.737 14014.622 - 14115.446: 14.2224% ( 169) 00:07:59.737 14115.446 - 14216.271: 15.9080% ( 137) 00:07:59.737 14216.271 - 14317.095: 18.2456% ( 190) 00:07:59.737 14317.095 - 14417.920: 21.1860% ( 239) 00:07:59.737 14417.920 - 14518.745: 23.7205% ( 206) 00:07:59.737 14518.745 - 14619.569: 26.7101% ( 243) 00:07:59.737 14619.569 - 14720.394: 29.2569% ( 207) 00:07:59.737 14720.394 - 14821.218: 32.0620% ( 228) 00:07:59.737 14821.218 - 14922.043: 34.9040% ( 231) 00:07:59.737 14922.043 - 15022.868: 37.6599% ( 224) 00:07:59.737 15022.868 - 15123.692: 40.9203% ( 265) 00:07:59.737 15123.692 - 15224.517: 43.1471% ( 181) 00:07:59.737 15224.517 - 15325.342: 45.6570% ( 204) 00:07:59.737 15325.342 - 15426.166: 48.2037% ( 207) 00:07:59.737 15426.166 - 15526.991: 50.3814% ( 177) 00:07:59.737 15526.991 - 15627.815: 52.9774% ( 211) 00:07:59.737 15627.815 - 15728.640: 55.8317% ( 232) 00:07:59.737 15728.640 - 15829.465: 58.4892% ( 216) 00:07:59.737 15829.465 - 15930.289: 61.0974% ( 212) 00:07:59.737 15930.289 - 16031.114: 63.3120% ( 180) 00:07:59.737 16031.114 - 16131.938: 65.4158% ( 171) 00:07:59.737 16131.938 - 16232.763: 67.9257% ( 204) 00:07:59.737 16232.763 - 16333.588: 70.4847% ( 208) 00:07:59.737 16333.588 - 16434.412: 72.2441% ( 143) 00:07:59.737 16434.412 - 16535.237: 73.8435% ( 130) 00:07:59.737 16535.237 - 16636.062: 75.2584% ( 115) 00:07:59.737 16636.062 - 16736.886: 76.5625% ( 106) 00:07:59.737 16736.886 - 16837.711: 77.9158% ( 110) 00:07:59.737 16837.711 - 16938.535: 79.4291% ( 123) 00:07:59.737 16938.535 - 17039.360: 80.5487% ( 91) 00:07:59.737 17039.360 - 17140.185: 81.8529% ( 106) 00:07:59.737 17140.185 - 17241.009: 82.9109% ( 86) 00:07:59.737 17241.009 - 17341.834: 83.7598% ( 69) 00:07:59.737 17341.834 - 17442.658: 84.5842% ( 67) 00:07:59.737 17442.658 - 17543.483: 85.2854% ( 57) 00:07:59.737 17543.483 - 17644.308: 85.9252% ( 52) 00:07:59.737 17644.308 - 17745.132: 86.5773% ( 53) 00:07:59.737 17745.132 - 17845.957: 87.2170% ( 52) 00:07:59.737 17845.957 - 17946.782: 87.9798% ( 62) 00:07:59.737 17946.782 - 18047.606: 89.0502% ( 87) 00:07:59.737 18047.606 - 18148.431: 89.9975% ( 77) 00:07:59.737 18148.431 - 18249.255: 
91.2279% ( 100)
00:07:59.737 18249.255 - 18350.080: 92.2490% ( 83)
00:07:59.737 18350.080 - 18450.905: 93.2948% ( 85)
00:07:59.737 18450.905 - 18551.729: 94.0822% ( 64)
00:07:59.737 18551.729 - 18652.554: 94.7712% ( 56)
00:07:59.737 18652.554 - 18753.378: 95.4478% ( 55)
00:07:59.737 18753.378 - 18854.203: 95.9646% ( 42)
00:07:59.737 18854.203 - 18955.028: 96.3214% ( 29)
00:07:59.737 18955.028 - 19055.852: 96.4567% ( 11)
00:07:59.737 19055.852 - 19156.677: 96.5797% ( 10)
00:07:59.737 19156.677 - 19257.502: 96.7028% ( 10)
00:07:59.737 19257.502 - 19358.326: 96.8258% ( 10)
00:07:59.737 19358.326 - 19459.151: 96.9611% ( 11)
00:07:59.737 19459.151 - 19559.975: 97.0595% ( 8)
00:07:59.737 19559.975 - 19660.800: 97.1949% ( 11)
00:07:59.737 19660.800 - 19761.625: 97.3548% ( 13)
00:07:59.737 19761.625 - 19862.449: 97.5148% ( 13)
00:07:59.737 19862.449 - 19963.274: 97.7116% ( 16)
00:07:59.737 19963.274 - 20064.098: 97.8469% ( 11)
00:07:59.737 20064.098 - 20164.923: 97.9700% ( 10)
00:07:59.737 20164.923 - 20265.748: 98.0807% ( 9)
00:07:59.737 20265.748 - 20366.572: 98.1914% ( 9)
00:07:59.737 20366.572 - 20467.397: 98.2653% ( 6)
00:07:59.737 20467.397 - 20568.222: 98.3391% ( 6)
00:07:59.737 20568.222 - 20669.046: 98.3883% ( 4)
00:07:59.737 20669.046 - 20769.871: 98.4252% ( 3)
00:07:59.737 20769.871 - 20870.695: 98.4498% ( 2)
00:07:59.737 20870.695 - 20971.520: 98.4990% ( 4)
00:07:59.737 20971.520 - 21072.345: 98.5482% ( 4)
00:07:59.737 21072.345 - 21173.169: 98.5974% ( 4)
00:07:59.737 21173.169 - 21273.994: 98.6590% ( 5)
00:07:59.737 21273.994 - 21374.818: 98.7205% ( 5)
00:07:59.737 21374.818 - 21475.643: 98.8435% ( 10)
00:07:59.737 21475.643 - 21576.468: 99.0404% ( 16)
00:07:59.737 21576.468 - 21677.292: 99.0896% ( 4)
00:07:59.737 21677.292 - 21778.117: 99.1265% ( 3)
00:07:59.737 21778.117 - 21878.942: 99.1757% ( 4)
00:07:59.737 21878.942 - 21979.766: 99.2126% ( 3)
00:07:59.737 27625.945 - 27827.594: 99.2618% ( 4)
00:07:59.737 27827.594 - 28029.243: 99.3848% ( 10)
00:07:59.737 28029.243 - 28230.892: 99.4587% ( 6)
00:07:59.737 28230.892 - 28432.542: 99.5817% ( 10)
00:07:59.737 28432.542 - 28634.191: 99.7047% ( 10)
00:07:59.737 28634.191 - 28835.840: 99.8278% ( 10)
00:07:59.737 28835.840 - 29037.489: 99.9508% ( 10)
00:07:59.737 29037.489 - 29239.138: 100.0000% ( 4)
00:07:59.737
00:07:59.737 ************************************
00:07:59.737 END TEST nvme_perf
00:07:59.737 ************************************
00:07:59.737 23:13:23 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:59.737
00:07:59.737 real 0m2.498s
00:07:59.737 user 0m2.168s
00:07:59.737 sys 0m0.206s
00:07:59.737 23:13:23 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:59.737 23:13:23 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:59.737 23:13:23 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:59.737 23:13:23 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:59.737 23:13:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:59.737 23:13:23 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:59.737 ************************************
00:07:59.737 START TEST nvme_hello_world
00:07:59.737 ************************************
00:07:59.737 23:13:23 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:59.737 Initializing NVMe Controllers
00:07:59.737 Attached to 0000:00:13.0
00:07:59.737 Namespace ID: 1 size: 1GB
00:07:59.737 Attached to 0000:00:10.0
00:07:59.737 Namespace ID: 1 size: 6GB
00:07:59.737 Attached to 0000:00:11.0
00:07:59.737 Namespace ID: 1 size: 5GB
00:07:59.737 Attached to 0000:00:12.0
00:07:59.737 Namespace ID: 1 size: 4GB
00:07:59.737 Namespace ID: 2 size: 4GB
00:07:59.737 Namespace ID: 3 size: 4GB
00:07:59.737 Initialization complete.
00:07:59.737 INFO: using host memory buffer for IO
00:07:59.737 Hello world!
00:07:59.737 INFO: using host memory buffer for IO
00:07:59.737 Hello world!
00:07:59.737 INFO: using host memory buffer for IO
00:07:59.737 Hello world!
00:07:59.737 INFO: using host memory buffer for IO
00:07:59.737 Hello world!
00:07:59.737 INFO: using host memory buffer for IO
00:07:59.737 Hello world!
00:07:59.737 INFO: using host memory buffer for IO
00:07:59.738 Hello world!
00:07:59.738
00:07:59.738 real 0m0.207s
00:07:59.738 user 0m0.078s
00:07:59.738 sys 0m0.089s
00:07:59.738 23:13:23 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:59.738 ************************************
00:07:59.738 END TEST nvme_hello_world
00:07:59.738 ************************************
00:07:59.738 23:13:23 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:59.738 23:13:23 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:59.738 23:13:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:59.738 23:13:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:59.738 23:13:23 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:59.999 ************************************
00:07:59.999 START TEST nvme_sgl
00:07:59.999 ************************************
00:07:59.999 23:13:23 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:59.999 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:59.999 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:59.999 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:59.999 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:59.999 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:59.999 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:59.999 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:59.999 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:59.999 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:59.999 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:59.999 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:59.999 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:59.999 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:59.999 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:59.999 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:59.999 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:59.999 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:59.999 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:59.999 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:59.999 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:59.999 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:59.999 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:59.999 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:59.999 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:59.999 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:59.999 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:59.999 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:59.999 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:59.999 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:59.999 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:59.999 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:59.999 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:59.999 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:59.999 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:59.999 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:59.999 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:59.999 NVMe Readv/Writev Request test
00:07:59.999 Attached to 0000:00:13.0
00:07:59.999 Attached to 0000:00:10.0
00:07:59.999 Attached to 0000:00:11.0
00:07:59.999 Attached to 0000:00:12.0
00:07:59.999 0000:00:10.0: build_io_request_2 test passed
00:07:59.999 0000:00:10.0: build_io_request_4 test passed
00:07:59.999 0000:00:10.0: build_io_request_5 test passed
00:07:59.999 0000:00:10.0: build_io_request_6 test passed
00:07:59.999 0000:00:10.0: build_io_request_7 test passed
00:07:59.999 0000:00:10.0: build_io_request_10 test passed
00:07:59.999 0000:00:11.0: build_io_request_2 test passed
00:07:59.999 0000:00:11.0: build_io_request_4 test passed
00:07:59.999 0000:00:11.0: build_io_request_5 test passed
00:07:59.999 0000:00:11.0: build_io_request_6 test passed
00:07:59.999 0000:00:11.0: build_io_request_7 test passed
00:07:59.999 0000:00:11.0: build_io_request_10 test passed
00:07:59.999 Cleaning up...
00:08:00.259
00:08:00.259 real 0m0.263s
00:08:00.259 user 0m0.143s
00:08:00.259 sys 0m0.073s
00:08:00.259 23:13:23 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:00.259 23:13:23 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:08:00.259 ************************************
00:08:00.259 END TEST nvme_sgl
00:08:00.259 ************************************
00:08:00.260 23:13:23 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:00.260 23:13:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:00.260 23:13:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:00.260 23:13:23 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:00.260 ************************************
00:08:00.260 START TEST nvme_e2edp
00:08:00.260 ************************************
00:08:00.260 23:13:23 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:00.260 NVMe Write/Read with End-to-End data protection test
00:08:00.260 Attached to 0000:00:13.0
00:08:00.260 Attached to 0000:00:10.0
00:08:00.260 Attached to 0000:00:11.0
00:08:00.260 Attached to 0000:00:12.0
00:08:00.260 Cleaning up...
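A note on the two binaries above: the sgl test's "Invalid IO length parameter" lines are its intended negative cases; each build_io_request_* variant that deliberately exceeds the controller's transfer limits must fail to build, and only the well-formed variants go on to log "test passed". The nvme_dp (e2edp) run then performs write/read passes with end-to-end data protection enabled on each attached controller before cleaning up. Assuming the environment the harness already set up (root privileges, hugepages, devices bound to SPDK), either binary could in principle be re-run by hand:

    # hypothetical manual re-run of the two sub-tests, using the paths captured above
    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/nvme/sgl/sgl          # scatter-gather list build/submit checks
    sudo ./test/nvme/e2edp/nvme_dp    # end-to-end data protection write/read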
00:08:00.522 ************************************ 00:08:00.522 END TEST nvme_e2edp 00:08:00.522 ************************************ 00:08:00.522 00:08:00.522 real 0m0.197s 00:08:00.522 user 0m0.067s 00:08:00.522 sys 0m0.079s 00:08:00.522 23:13:24 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.522 23:13:24 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:00.522 23:13:24 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:00.522 23:13:24 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.522 23:13:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.522 23:13:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.522 ************************************ 00:08:00.522 START TEST nvme_reserve 00:08:00.522 ************************************ 00:08:00.522 23:13:24 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:00.522 ===================================================== 00:08:00.522 NVMe Controller at PCI bus 0, device 19, function 0 00:08:00.522 ===================================================== 00:08:00.522 Reservations: Not Supported 00:08:00.522 ===================================================== 00:08:00.522 NVMe Controller at PCI bus 0, device 16, function 0 00:08:00.522 ===================================================== 00:08:00.522 Reservations: Not Supported 00:08:00.522 ===================================================== 00:08:00.522 NVMe Controller at PCI bus 0, device 17, function 0 00:08:00.522 ===================================================== 00:08:00.522 Reservations: Not Supported 00:08:00.522 ===================================================== 00:08:00.522 NVMe Controller at PCI bus 0, device 18, function 0 00:08:00.522 ===================================================== 00:08:00.522 Reservations: Not Supported 00:08:00.522 Reservation test passed 00:08:00.783 00:08:00.783 real 0m0.198s 00:08:00.783 user 0m0.053s 00:08:00.783 sys 0m0.097s 00:08:00.783 23:13:24 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.783 23:13:24 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:00.783 ************************************ 00:08:00.783 END TEST nvme_reserve 00:08:00.783 ************************************ 00:08:00.783 23:13:24 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:00.783 23:13:24 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.783 23:13:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.783 23:13:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.783 ************************************ 00:08:00.783 START TEST nvme_err_injection 00:08:00.783 ************************************ 00:08:00.783 23:13:24 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:01.048 NVMe Error Injection test 00:08:01.048 Attached to 0000:00:13.0 00:08:01.048 Attached to 0000:00:10.0 00:08:01.048 Attached to 0000:00:11.0 00:08:01.048 Attached to 0000:00:12.0 00:08:01.048 0000:00:10.0: get features failed as expected 00:08:01.048 0000:00:11.0: get features failed as expected 00:08:01.048 0000:00:12.0: get features failed as expected 00:08:01.048 0000:00:13.0: get features failed as expected 00:08:01.048 
0000:00:13.0: get features successfully as expected 00:08:01.048 0000:00:10.0: get features successfully as expected 00:08:01.048 0000:00:11.0: get features successfully as expected 00:08:01.048 0000:00:12.0: get features successfully as expected 00:08:01.048 0000:00:12.0: read failed as expected 00:08:01.048 0000:00:13.0: read failed as expected 00:08:01.048 0000:00:10.0: read failed as expected 00:08:01.048 0000:00:11.0: read failed as expected 00:08:01.048 0000:00:12.0: read successfully as expected 00:08:01.048 0000:00:13.0: read successfully as expected 00:08:01.048 0000:00:10.0: read successfully as expected 00:08:01.048 0000:00:11.0: read successfully as expected 00:08:01.048 Cleaning up... 00:08:01.048 ************************************ 00:08:01.048 END TEST nvme_err_injection 00:08:01.048 ************************************ 00:08:01.048 00:08:01.048 real 0m0.202s 00:08:01.048 user 0m0.082s 00:08:01.048 sys 0m0.074s 00:08:01.048 23:13:24 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.048 23:13:24 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:01.048 23:13:24 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:01.048 23:13:24 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:01.048 23:13:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.048 23:13:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.048 ************************************ 00:08:01.048 START TEST nvme_overhead 00:08:01.048 ************************************ 00:08:01.048 23:13:24 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:02.435 Initializing NVMe Controllers 00:08:02.435 Attached to 0000:00:13.0 00:08:02.435 Attached to 0000:00:10.0 00:08:02.435 Attached to 0000:00:11.0 00:08:02.435 Attached to 0000:00:12.0 00:08:02.435 Initialization complete. Launching workers. 
00:08:02.435 submit (in ns) avg, min, max = 15433.2, 11895.4, 263531.5 00:08:02.435 complete (in ns) avg, min, max = 9147.5, 8150.0, 117403.8 00:08:02.435 00:08:02.435 Submit histogram 00:08:02.435 ================ 00:08:02.435 Range in us Cumulative Count 00:08:02.435 11.865 - 11.914: 0.0330% ( 1) 00:08:02.435 12.455 - 12.505: 0.0660% ( 1) 00:08:02.435 12.702 - 12.800: 0.0989% ( 1) 00:08:02.435 13.194 - 13.292: 0.1319% ( 1) 00:08:02.435 13.292 - 13.391: 0.1649% ( 1) 00:08:02.435 13.391 - 13.489: 0.1979% ( 1) 00:08:02.435 13.489 - 13.588: 0.3628% ( 5) 00:08:02.435 13.588 - 13.686: 1.1214% ( 23) 00:08:02.435 13.686 - 13.785: 2.5066% ( 42) 00:08:02.435 13.785 - 13.883: 5.7718% ( 99) 00:08:02.435 13.883 - 13.982: 11.5435% ( 175) 00:08:02.435 13.982 - 14.080: 19.1623% ( 231) 00:08:02.435 14.080 - 14.178: 28.4301% ( 281) 00:08:02.435 14.178 - 14.277: 38.1926% ( 296) 00:08:02.435 14.277 - 14.375: 48.5158% ( 313) 00:08:02.435 14.375 - 14.474: 57.1240% ( 261) 00:08:02.435 14.474 - 14.572: 63.8522% ( 204) 00:08:02.435 14.572 - 14.671: 69.3272% ( 166) 00:08:02.435 14.671 - 14.769: 73.1860% ( 117) 00:08:02.435 14.769 - 14.868: 76.0224% ( 86) 00:08:02.435 14.868 - 14.966: 77.7704% ( 53) 00:08:02.435 14.966 - 15.065: 79.0567% ( 39) 00:08:02.435 15.065 - 15.163: 80.1451% ( 33) 00:08:02.435 15.163 - 15.262: 81.0026% ( 26) 00:08:02.435 15.262 - 15.360: 81.5963% ( 18) 00:08:02.435 15.360 - 15.458: 82.2230% ( 19) 00:08:02.435 15.458 - 15.557: 82.5528% ( 10) 00:08:02.435 15.557 - 15.655: 82.7836% ( 7) 00:08:02.435 15.655 - 15.754: 83.2784% ( 15) 00:08:02.435 15.754 - 15.852: 83.6412% ( 11) 00:08:02.435 15.852 - 15.951: 83.9710% ( 10) 00:08:02.435 15.951 - 16.049: 84.1029% ( 4) 00:08:02.435 16.049 - 16.148: 84.2678% ( 5) 00:08:02.435 16.148 - 16.246: 84.6966% ( 13) 00:08:02.435 16.246 - 16.345: 84.9934% ( 9) 00:08:02.435 16.345 - 16.443: 85.2902% ( 9) 00:08:02.435 16.443 - 16.542: 85.4881% ( 6) 00:08:02.435 16.542 - 16.640: 85.7190% ( 7) 00:08:02.435 16.640 - 16.738: 85.9499% ( 7) 00:08:02.435 16.738 - 16.837: 86.0488% ( 3) 00:08:02.435 16.837 - 16.935: 86.0818% ( 1) 00:08:02.435 16.935 - 17.034: 86.3127% ( 7) 00:08:02.435 17.034 - 17.132: 86.5435% ( 7) 00:08:02.435 17.132 - 17.231: 86.6095% ( 2) 00:08:02.435 17.231 - 17.329: 86.7744% ( 5) 00:08:02.435 17.329 - 17.428: 86.8404% ( 2) 00:08:02.435 17.428 - 17.526: 86.9393% ( 3) 00:08:02.435 17.526 - 17.625: 87.1042% ( 5) 00:08:02.435 17.625 - 17.723: 87.2361% ( 4) 00:08:02.435 17.723 - 17.822: 87.3351% ( 3) 00:08:02.435 17.822 - 17.920: 87.5000% ( 5) 00:08:02.435 17.920 - 18.018: 87.6319% ( 4) 00:08:02.435 18.018 - 18.117: 87.7309% ( 3) 00:08:02.435 18.117 - 18.215: 87.9288% ( 6) 00:08:02.435 18.215 - 18.314: 88.1596% ( 7) 00:08:02.435 18.314 - 18.412: 88.2916% ( 4) 00:08:02.435 18.412 - 18.511: 88.4235% ( 4) 00:08:02.435 18.511 - 18.609: 88.5884% ( 5) 00:08:02.435 18.609 - 18.708: 89.0831% ( 15) 00:08:02.435 18.708 - 18.806: 89.2810% ( 6) 00:08:02.435 18.806 - 18.905: 89.6768% ( 12) 00:08:02.435 18.905 - 19.003: 90.0066% ( 10) 00:08:02.435 19.003 - 19.102: 90.2375% ( 7) 00:08:02.435 19.102 - 19.200: 90.5343% ( 9) 00:08:02.435 19.200 - 19.298: 90.8971% ( 11) 00:08:02.435 19.298 - 19.397: 91.1939% ( 9) 00:08:02.435 19.397 - 19.495: 91.2929% ( 3) 00:08:02.435 19.495 - 19.594: 91.7216% ( 13) 00:08:02.435 19.594 - 19.692: 92.0185% ( 9) 00:08:02.435 19.692 - 19.791: 92.3153% ( 9) 00:08:02.435 19.791 - 19.889: 92.4802% ( 5) 00:08:02.435 19.889 - 19.988: 92.6781% ( 6) 00:08:02.435 19.988 - 20.086: 93.0079% ( 10) 00:08:02.435 20.086 - 20.185: 93.2388% ( 7) 00:08:02.435 
20.185 - 20.283: 93.3047% ( 2) 00:08:02.435 20.283 - 20.382: 93.4037% ( 3) 00:08:02.435 20.382 - 20.480: 93.6016% ( 6) 00:08:02.435 20.480 - 20.578: 93.9314% ( 10) 00:08:02.435 20.578 - 20.677: 94.0633% ( 4) 00:08:02.435 20.677 - 20.775: 94.3272% ( 8) 00:08:02.435 20.775 - 20.874: 94.7559% ( 13) 00:08:02.435 20.874 - 20.972: 95.0858% ( 10) 00:08:02.435 20.972 - 21.071: 95.3826% ( 9) 00:08:02.435 21.071 - 21.169: 95.6135% ( 7) 00:08:02.435 21.169 - 21.268: 95.8113% ( 6) 00:08:02.435 21.268 - 21.366: 95.9433% ( 4) 00:08:02.435 21.366 - 21.465: 96.0422% ( 3) 00:08:02.435 21.465 - 21.563: 96.2401% ( 6) 00:08:02.435 21.563 - 21.662: 96.2731% ( 1) 00:08:02.435 21.662 - 21.760: 96.4380% ( 5) 00:08:02.435 21.760 - 21.858: 96.6029% ( 5) 00:08:02.435 21.858 - 21.957: 96.7348% ( 4) 00:08:02.435 21.957 - 22.055: 96.8008% ( 2) 00:08:02.435 22.055 - 22.154: 96.9987% ( 6) 00:08:02.435 22.154 - 22.252: 97.0646% ( 2) 00:08:02.436 22.252 - 22.351: 97.0976% ( 1) 00:08:02.436 22.351 - 22.449: 97.2296% ( 4) 00:08:02.436 22.449 - 22.548: 97.2955% ( 2) 00:08:02.436 22.548 - 22.646: 97.3285% ( 1) 00:08:02.436 22.646 - 22.745: 97.3615% ( 1) 00:08:02.436 22.745 - 22.843: 97.4274% ( 2) 00:08:02.436 22.843 - 22.942: 97.4604% ( 1) 00:08:02.436 22.942 - 23.040: 97.4934% ( 1) 00:08:02.436 23.040 - 23.138: 97.5264% ( 1) 00:08:02.436 23.138 - 23.237: 97.5594% ( 1) 00:08:02.436 23.237 - 23.335: 97.6913% ( 4) 00:08:02.436 23.335 - 23.434: 97.7902% ( 3) 00:08:02.436 23.434 - 23.532: 97.8892% ( 3) 00:08:02.436 23.532 - 23.631: 97.9881% ( 3) 00:08:02.436 23.631 - 23.729: 98.0211% ( 1) 00:08:02.436 23.729 - 23.828: 98.0871% ( 2) 00:08:02.436 23.828 - 23.926: 98.1201% ( 1) 00:08:02.436 23.926 - 24.025: 98.2520% ( 4) 00:08:02.436 24.025 - 24.123: 98.3179% ( 2) 00:08:02.436 24.123 - 24.222: 98.3509% ( 1) 00:08:02.436 24.222 - 24.320: 98.3839% ( 1) 00:08:02.436 24.320 - 24.418: 98.4169% ( 1) 00:08:02.436 24.517 - 24.615: 98.4828% ( 2) 00:08:02.436 24.714 - 24.812: 98.5818% ( 3) 00:08:02.436 24.812 - 24.911: 98.7137% ( 4) 00:08:02.436 25.009 - 25.108: 98.7797% ( 2) 00:08:02.436 25.108 - 25.206: 98.8127% ( 1) 00:08:02.436 25.206 - 25.403: 98.8786% ( 2) 00:08:02.436 25.403 - 25.600: 98.9116% ( 1) 00:08:02.436 25.600 - 25.797: 98.9446% ( 1) 00:08:02.436 25.994 - 26.191: 99.0106% ( 2) 00:08:02.436 26.191 - 26.388: 99.0435% ( 1) 00:08:02.436 26.782 - 26.978: 99.1095% ( 2) 00:08:02.436 27.372 - 27.569: 99.1425% ( 1) 00:08:02.436 27.766 - 27.963: 99.2084% ( 2) 00:08:02.436 27.963 - 28.160: 99.3074% ( 3) 00:08:02.436 28.554 - 28.751: 99.3404% ( 1) 00:08:02.436 29.145 - 29.342: 99.3734% ( 1) 00:08:02.436 29.342 - 29.538: 99.4063% ( 1) 00:08:02.436 29.538 - 29.735: 99.4393% ( 1) 00:08:02.436 29.735 - 29.932: 99.4723% ( 1) 00:08:02.436 30.720 - 30.917: 99.5053% ( 1) 00:08:02.436 31.114 - 31.311: 99.5383% ( 1) 00:08:02.436 31.902 - 32.098: 99.6372% ( 3) 00:08:02.436 32.098 - 32.295: 99.6702% ( 1) 00:08:02.436 32.689 - 32.886: 99.7032% ( 1) 00:08:02.436 35.643 - 35.840: 99.7361% ( 1) 00:08:02.436 39.582 - 39.778: 99.8021% ( 2) 00:08:02.436 40.566 - 40.763: 99.8351% ( 1) 00:08:02.436 42.338 - 42.535: 99.8681% ( 1) 00:08:02.436 43.520 - 43.717: 99.9011% ( 1) 00:08:02.436 48.837 - 49.034: 99.9340% ( 1) 00:08:02.436 67.348 - 67.742: 99.9670% ( 1) 00:08:02.436 263.089 - 264.665: 100.0000% ( 1) 00:08:02.436 00:08:02.436 Complete histogram 00:08:02.436 ================== 00:08:02.436 Range in us Cumulative Count 00:08:02.436 8.123 - 8.172: 0.0660% ( 2) 00:08:02.436 8.172 - 8.222: 0.7916% ( 22) 00:08:02.436 8.222 - 8.271: 3.2322% ( 74) 00:08:02.436 
8.271 - 8.320: 7.5198% ( 130) 00:08:02.436 8.320 - 8.369: 14.0501% ( 198) 00:08:02.436 8.369 - 8.418: 22.9881% ( 271) 00:08:02.436 8.418 - 8.468: 33.7731% ( 327) 00:08:02.436 8.468 - 8.517: 44.6240% ( 329) 00:08:02.436 8.517 - 8.566: 53.3311% ( 264) 00:08:02.436 8.566 - 8.615: 61.8074% ( 257) 00:08:02.436 8.615 - 8.665: 67.8100% ( 182) 00:08:02.436 8.665 - 8.714: 73.3509% ( 168) 00:08:02.436 8.714 - 8.763: 77.6385% ( 130) 00:08:02.436 8.763 - 8.812: 81.0026% ( 102) 00:08:02.436 8.812 - 8.862: 83.8391% ( 86) 00:08:02.436 8.862 - 8.911: 86.1807% ( 71) 00:08:02.436 8.911 - 8.960: 87.5330% ( 41) 00:08:02.436 8.960 - 9.009: 88.8522% ( 40) 00:08:02.436 9.009 - 9.058: 89.5449% ( 21) 00:08:02.436 9.058 - 9.108: 90.4024% ( 26) 00:08:02.436 9.108 - 9.157: 90.8311% ( 13) 00:08:02.436 9.157 - 9.206: 91.2269% ( 12) 00:08:02.436 9.206 - 9.255: 91.5237% ( 9) 00:08:02.436 9.255 - 9.305: 91.9525% ( 13) 00:08:02.436 9.305 - 9.354: 92.0844% ( 4) 00:08:02.436 9.354 - 9.403: 92.2493% ( 5) 00:08:02.436 9.403 - 9.452: 92.4802% ( 7) 00:08:02.436 9.452 - 9.502: 92.5792% ( 3) 00:08:02.436 9.502 - 9.551: 92.7111% ( 4) 00:08:02.436 9.551 - 9.600: 92.7770% ( 2) 00:08:02.436 9.600 - 9.649: 92.9749% ( 6) 00:08:02.436 9.649 - 9.698: 93.1069% ( 4) 00:08:02.436 9.698 - 9.748: 93.3047% ( 6) 00:08:02.436 9.748 - 9.797: 93.4037% ( 3) 00:08:02.436 9.797 - 9.846: 93.4697% ( 2) 00:08:02.436 9.846 - 9.895: 93.6346% ( 5) 00:08:02.436 9.895 - 9.945: 93.6675% ( 1) 00:08:02.436 9.945 - 9.994: 93.7995% ( 4) 00:08:02.436 9.994 - 10.043: 93.8654% ( 2) 00:08:02.436 10.043 - 10.092: 93.9314% ( 2) 00:08:02.436 10.142 - 10.191: 93.9644% ( 1) 00:08:02.436 10.240 - 10.289: 93.9974% ( 1) 00:08:02.436 10.289 - 10.338: 94.0303% ( 1) 00:08:02.436 10.388 - 10.437: 94.0963% ( 2) 00:08:02.436 10.437 - 10.486: 94.1623% ( 2) 00:08:02.436 10.585 - 10.634: 94.1953% ( 1) 00:08:02.436 10.732 - 10.782: 94.2942% ( 3) 00:08:02.436 10.831 - 10.880: 94.3272% ( 1) 00:08:02.436 10.880 - 10.929: 94.3602% ( 1) 00:08:02.436 10.978 - 11.028: 94.3931% ( 1) 00:08:02.436 11.028 - 11.077: 94.4261% ( 1) 00:08:02.436 11.077 - 11.126: 94.4591% ( 1) 00:08:02.436 11.126 - 11.175: 94.5251% ( 2) 00:08:02.436 11.175 - 11.225: 94.5580% ( 1) 00:08:02.436 11.225 - 11.274: 94.6240% ( 2) 00:08:02.436 11.323 - 11.372: 94.6900% ( 2) 00:08:02.436 11.372 - 11.422: 94.7230% ( 1) 00:08:02.436 11.422 - 11.471: 94.7559% ( 1) 00:08:02.436 11.471 - 11.520: 94.7889% ( 1) 00:08:02.436 11.618 - 11.668: 94.8549% ( 2) 00:08:02.436 11.668 - 11.717: 94.8879% ( 1) 00:08:02.436 11.717 - 11.766: 94.9208% ( 1) 00:08:02.436 11.766 - 11.815: 95.0528% ( 4) 00:08:02.436 11.963 - 12.012: 95.1187% ( 2) 00:08:02.436 12.160 - 12.209: 95.1517% ( 1) 00:08:02.436 12.209 - 12.258: 95.2507% ( 3) 00:08:02.436 12.258 - 12.308: 95.4156% ( 5) 00:08:02.436 12.308 - 12.357: 95.4485% ( 1) 00:08:02.436 12.357 - 12.406: 95.5145% ( 2) 00:08:02.436 12.406 - 12.455: 95.5805% ( 2) 00:08:02.436 12.455 - 12.505: 95.6464% ( 2) 00:08:02.436 12.505 - 12.554: 95.6794% ( 1) 00:08:02.436 12.554 - 12.603: 95.7454% ( 2) 00:08:02.436 12.603 - 12.702: 95.9763% ( 7) 00:08:02.436 12.702 - 12.800: 96.0752% ( 3) 00:08:02.436 12.800 - 12.898: 96.1082% ( 1) 00:08:02.436 12.898 - 12.997: 96.2731% ( 5) 00:08:02.436 12.997 - 13.095: 96.4380% ( 5) 00:08:02.436 13.095 - 13.194: 96.4710% ( 1) 00:08:02.436 13.194 - 13.292: 96.5699% ( 3) 00:08:02.436 13.292 - 13.391: 96.7018% ( 4) 00:08:02.436 13.391 - 13.489: 96.7678% ( 2) 00:08:02.436 13.489 - 13.588: 96.8338% ( 2) 00:08:02.436 13.588 - 13.686: 96.9327% ( 3) 00:08:02.436 13.686 - 13.785: 96.9657% ( 1) 
00:08:02.436 13.785 - 13.883: 97.0976% ( 4) 00:08:02.436 13.883 - 13.982: 97.1966% ( 3) 00:08:02.436 14.080 - 14.178: 97.2296% ( 1) 00:08:02.436 14.178 - 14.277: 97.2625% ( 1) 00:08:02.436 14.375 - 14.474: 97.3945% ( 4) 00:08:02.436 14.572 - 14.671: 97.4274% ( 1) 00:08:02.436 14.671 - 14.769: 97.4604% ( 1) 00:08:02.436 14.769 - 14.868: 97.5923% ( 4) 00:08:02.436 14.868 - 14.966: 97.6583% ( 2) 00:08:02.436 14.966 - 15.065: 97.6913% ( 1) 00:08:02.436 15.065 - 15.163: 97.7243% ( 1) 00:08:02.436 15.360 - 15.458: 97.7902% ( 2) 00:08:02.436 15.458 - 15.557: 97.8562% ( 2) 00:08:02.436 15.557 - 15.655: 97.9551% ( 3) 00:08:02.436 15.655 - 15.754: 98.0211% ( 2) 00:08:02.436 15.754 - 15.852: 98.1530% ( 4) 00:08:02.436 15.852 - 15.951: 98.2520% ( 3) 00:08:02.436 15.951 - 16.049: 98.3179% ( 2) 00:08:02.436 16.049 - 16.148: 98.3839% ( 2) 00:08:02.436 16.148 - 16.246: 98.4169% ( 1) 00:08:02.436 16.246 - 16.345: 98.5158% ( 3) 00:08:02.436 16.345 - 16.443: 98.5818% ( 2) 00:08:02.436 16.640 - 16.738: 98.6148% ( 1) 00:08:02.436 16.738 - 16.837: 98.7137% ( 3) 00:08:02.436 17.132 - 17.231: 98.7467% ( 1) 00:08:02.436 17.231 - 17.329: 98.8456% ( 3) 00:08:02.436 17.329 - 17.428: 98.9116% ( 2) 00:08:02.436 17.625 - 17.723: 98.9446% ( 1) 00:08:02.436 18.117 - 18.215: 98.9776% ( 1) 00:08:02.436 19.495 - 19.594: 99.0106% ( 1) 00:08:02.436 19.594 - 19.692: 99.0435% ( 1) 00:08:02.436 19.692 - 19.791: 99.0765% ( 1) 00:08:02.436 21.268 - 21.366: 99.1095% ( 1) 00:08:02.436 21.662 - 21.760: 99.1425% ( 1) 00:08:02.436 21.760 - 21.858: 99.1755% ( 1) 00:08:02.436 22.252 - 22.351: 99.2084% ( 1) 00:08:02.436 22.351 - 22.449: 99.2414% ( 1) 00:08:02.436 23.434 - 23.532: 99.2744% ( 1) 00:08:02.436 23.729 - 23.828: 99.3074% ( 1) 00:08:02.436 24.320 - 24.418: 99.3404% ( 1) 00:08:02.436 24.911 - 25.009: 99.3734% ( 1) 00:08:02.436 25.108 - 25.206: 99.4063% ( 1) 00:08:02.436 25.403 - 25.600: 99.4393% ( 1) 00:08:02.437 27.372 - 27.569: 99.5053% ( 2) 00:08:02.437 28.357 - 28.554: 99.5383% ( 1) 00:08:02.437 29.342 - 29.538: 99.5712% ( 1) 00:08:02.437 29.932 - 30.129: 99.6042% ( 1) 00:08:02.437 30.129 - 30.326: 99.6372% ( 1) 00:08:02.437 31.311 - 31.508: 99.6702% ( 1) 00:08:02.437 34.068 - 34.265: 99.7032% ( 1) 00:08:02.437 34.265 - 34.462: 99.7361% ( 1) 00:08:02.437 35.249 - 35.446: 99.7691% ( 1) 00:08:02.437 36.825 - 37.022: 99.8021% ( 1) 00:08:02.437 50.018 - 50.215: 99.8351% ( 1) 00:08:02.437 66.954 - 67.348: 99.8681% ( 1) 00:08:02.437 103.188 - 103.975: 99.9011% ( 1) 00:08:02.437 106.338 - 107.126: 99.9340% ( 1) 00:08:02.437 109.489 - 110.277: 99.9670% ( 1) 00:08:02.437 117.366 - 118.154: 100.0000% ( 1) 00:08:02.437 00:08:02.437 00:08:02.437 real 0m1.206s 00:08:02.437 user 0m1.067s 00:08:02.437 sys 0m0.085s 00:08:02.437 23:13:25 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.437 ************************************ 00:08:02.437 23:13:25 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:02.437 END TEST nvme_overhead 00:08:02.437 ************************************ 00:08:02.437 23:13:25 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:02.437 23:13:25 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:02.437 23:13:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.437 23:13:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.437 ************************************ 00:08:02.437 START TEST nvme_arbitration 00:08:02.437 ************************************ 
00:08:02.437 23:13:25 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:05.762 Initializing NVMe Controllers 00:08:05.762 Attached to 0000:00:13.0 00:08:05.762 Attached to 0000:00:10.0 00:08:05.762 Attached to 0000:00:11.0 00:08:05.762 Attached to 0000:00:12.0 00:08:05.762 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:08:05.762 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:08:05.762 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:08:05.762 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:05.762 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:05.762 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:05.762 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:05.762 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:05.762 Initialization complete. Launching workers. 00:08:05.762 Starting thread on core 1 with urgent priority queue 00:08:05.762 Starting thread on core 2 with urgent priority queue 00:08:05.762 Starting thread on core 3 with urgent priority queue 00:08:05.762 Starting thread on core 0 with urgent priority queue 00:08:05.762 QEMU NVMe Ctrl (12343 ) core 0: 3861.33 IO/s 25.90 secs/100000 ios 00:08:05.762 QEMU NVMe Ctrl (12342 ) core 0: 3861.33 IO/s 25.90 secs/100000 ios 00:08:05.762 QEMU NVMe Ctrl (12340 ) core 1: 3733.33 IO/s 26.79 secs/100000 ios 00:08:05.762 QEMU NVMe Ctrl (12342 ) core 1: 3733.33 IO/s 26.79 secs/100000 ios 00:08:05.762 QEMU NVMe Ctrl (12341 ) core 2: 3434.67 IO/s 29.11 secs/100000 ios 00:08:05.762 QEMU NVMe Ctrl (12342 ) core 3: 3413.33 IO/s 29.30 secs/100000 ios 00:08:05.762 ======================================================== 00:08:05.762 00:08:05.762 ************************************ 00:08:05.762 END TEST nvme_arbitration 00:08:05.762 ************************************ 00:08:05.762 00:08:05.762 real 0m3.243s 00:08:05.762 user 0m9.023s 00:08:05.762 sys 0m0.107s 00:08:05.762 23:13:29 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.762 23:13:29 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:05.762 23:13:29 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:05.762 23:13:29 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:05.762 23:13:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.762 23:13:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.762 ************************************ 00:08:05.762 START TEST nvme_single_aen 00:08:05.762 ************************************ 00:08:05.762 23:13:29 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:05.762 Asynchronous Event Request test 00:08:05.762 Attached to 0000:00:13.0 00:08:05.762 Attached to 0000:00:10.0 00:08:05.762 Attached to 0000:00:11.0 00:08:05.762 Attached to 0000:00:12.0 00:08:05.762 Reset controller to setup AER completions for this process 00:08:05.762 Registering asynchronous event callbacks... 
00:08:05.762 Getting orig temperature thresholds of all controllers 00:08:05.762 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:05.762 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:05.762 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:05.762 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:05.762 Setting all controllers temperature threshold low to trigger AER 00:08:05.762 Waiting for all controllers temperature threshold to be set lower 00:08:05.762 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:05.763 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:05.763 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:05.763 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:05.763 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:05.763 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:05.763 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:05.763 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:05.763 Waiting for all controllers to trigger AER and reset threshold 00:08:05.763 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:05.763 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:05.763 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:05.763 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:05.763 Cleaning up... 00:08:05.763 00:08:05.763 real 0m0.211s 00:08:05.763 user 0m0.066s 00:08:05.763 sys 0m0.099s 00:08:05.763 23:13:29 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.763 23:13:29 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:05.763 ************************************ 00:08:05.763 END TEST nvme_single_aen 00:08:05.763 ************************************ 00:08:05.763 23:13:29 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:05.763 23:13:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.763 23:13:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.763 23:13:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.763 ************************************ 00:08:05.763 START TEST nvme_doorbell_aers 00:08:05.763 ************************************ 00:08:05.763 23:13:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:05.763 23:13:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:05.763 23:13:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:05.763 23:13:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:05.763 23:13:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:05.763 23:13:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:05.763 23:13:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:05.763 23:13:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:05.763 23:13:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:05.763 23:13:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
00:08:06.025 23:13:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:06.025 23:13:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:06.025 23:13:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:06.025 23:13:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:06.025 [2024-11-17 23:13:29.815546] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74736) is not found. Dropping the request. 00:08:16.117 Executing: test_write_invalid_db 00:08:16.117 Waiting for AER completion... 00:08:16.117 Failure: test_write_invalid_db 00:08:16.117 00:08:16.117 Executing: test_invalid_db_write_overflow_sq 00:08:16.117 Waiting for AER completion... 00:08:16.117 Failure: test_invalid_db_write_overflow_sq 00:08:16.117 00:08:16.117 Executing: test_invalid_db_write_overflow_cq 00:08:16.117 Waiting for AER completion... 00:08:16.117 Failure: test_invalid_db_write_overflow_cq 00:08:16.117 00:08:16.117 23:13:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:16.117 23:13:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:16.117 [2024-11-17 23:13:39.866205] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74736) is not found. Dropping the request. 00:08:26.129 Executing: test_write_invalid_db 00:08:26.129 Waiting for AER completion... 00:08:26.129 Failure: test_write_invalid_db 00:08:26.129 00:08:26.129 Executing: test_invalid_db_write_overflow_sq 00:08:26.129 Waiting for AER completion... 00:08:26.129 Failure: test_invalid_db_write_overflow_sq 00:08:26.129 00:08:26.129 Executing: test_invalid_db_write_overflow_cq 00:08:26.129 Waiting for AER completion... 00:08:26.129 Failure: test_invalid_db_write_overflow_cq 00:08:26.129 00:08:26.129 23:13:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:26.129 23:13:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:26.129 [2024-11-17 23:13:49.883105] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74736) is not found. Dropping the request. 00:08:36.098 Executing: test_write_invalid_db 00:08:36.098 Waiting for AER completion... 00:08:36.098 Failure: test_write_invalid_db 00:08:36.098 00:08:36.098 Executing: test_invalid_db_write_overflow_sq 00:08:36.098 Waiting for AER completion... 00:08:36.098 Failure: test_invalid_db_write_overflow_sq 00:08:36.098 00:08:36.098 Executing: test_invalid_db_write_overflow_cq 00:08:36.098 Waiting for AER completion... 
00:08:36.098 Failure: test_invalid_db_write_overflow_cq 00:08:36.098 00:08:36.098 23:13:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:36.098 23:13:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:36.098 [2024-11-17 23:13:59.912361] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74736) is not found. Dropping the request. 00:08:46.141 Executing: test_write_invalid_db 00:08:46.141 Waiting for AER completion... 00:08:46.141 Failure: test_write_invalid_db 00:08:46.141 00:08:46.141 Executing: test_invalid_db_write_overflow_sq 00:08:46.141 Waiting for AER completion... 00:08:46.141 Failure: test_invalid_db_write_overflow_sq 00:08:46.141 00:08:46.141 Executing: test_invalid_db_write_overflow_cq 00:08:46.141 Waiting for AER completion... 00:08:46.141 Failure: test_invalid_db_write_overflow_cq 00:08:46.141 00:08:46.141 00:08:46.141 real 0m40.194s 00:08:46.141 user 0m34.248s 00:08:46.141 sys 0m5.522s 00:08:46.141 23:14:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.141 23:14:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:46.141 ************************************ 00:08:46.141 END TEST nvme_doorbell_aers 00:08:46.141 ************************************ 00:08:46.141 23:14:09 nvme -- nvme/nvme.sh@97 -- # uname 00:08:46.141 23:14:09 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:46.141 23:14:09 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:46.141 23:14:09 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:46.141 23:14:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.141 23:14:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:46.141 ************************************ 00:08:46.141 START TEST nvme_multi_aen 00:08:46.141 ************************************ 00:08:46.141 23:14:09 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:46.141 [2024-11-17 23:14:09.920271] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74736) is not found. Dropping the request. 00:08:46.141 [2024-11-17 23:14:09.920332] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74736) is not found. Dropping the request. 00:08:46.141 [2024-11-17 23:14:09.920343] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74736) is not found. Dropping the request. 00:08:46.141 [2024-11-17 23:14:09.921468] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74736) is not found. Dropping the request. 00:08:46.141 [2024-11-17 23:14:09.921487] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74736) is not found. Dropping the request. 00:08:46.141 [2024-11-17 23:14:09.921493] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74736) is not found. Dropping the request. 00:08:46.141 [2024-11-17 23:14:09.922347] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74736) is not found. 
Dropping the request. 00:08:46.141 [2024-11-17 23:14:09.922369] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74736) is not found. Dropping the request. 00:08:46.141 [2024-11-17 23:14:09.922376] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74736) is not found. Dropping the request. 00:08:46.141 [2024-11-17 23:14:09.923213] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74736) is not found. Dropping the request. 00:08:46.141 [2024-11-17 23:14:09.923233] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74736) is not found. Dropping the request. 00:08:46.141 [2024-11-17 23:14:09.923240] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74736) is not found. Dropping the request. 00:08:46.141 Child process pid: 75257 00:08:46.399 [Child] Asynchronous Event Request test 00:08:46.399 [Child] Attached to 0000:00:13.0 00:08:46.399 [Child] Attached to 0000:00:10.0 00:08:46.399 [Child] Attached to 0000:00:11.0 00:08:46.399 [Child] Attached to 0000:00:12.0 00:08:46.399 [Child] Registering asynchronous event callbacks... 00:08:46.399 [Child] Getting orig temperature thresholds of all controllers 00:08:46.399 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:46.399 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:46.399 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:46.399 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:46.399 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:46.399 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:46.399 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:46.399 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:46.399 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:46.399 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:46.399 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:46.399 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:46.399 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:46.399 [Child] Cleaning up... 00:08:46.399 Asynchronous Event Request test 00:08:46.399 Attached to 0000:00:13.0 00:08:46.399 Attached to 0000:00:10.0 00:08:46.399 Attached to 0000:00:11.0 00:08:46.399 Attached to 0000:00:12.0 00:08:46.399 Reset controller to setup AER completions for this process 00:08:46.399 Registering asynchronous event callbacks... 
00:08:46.399 Getting orig temperature thresholds of all controllers 00:08:46.399 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:46.399 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:46.399 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:46.399 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:46.400 Setting all controllers temperature threshold low to trigger AER 00:08:46.400 Waiting for all controllers temperature threshold to be set lower 00:08:46.400 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:46.400 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:46.400 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:46.400 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:46.400 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:46.400 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:46.400 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:46.400 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:46.400 Waiting for all controllers to trigger AER and reset threshold 00:08:46.400 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:46.400 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:46.400 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:46.400 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:46.400 Cleaning up... 00:08:46.400 00:08:46.400 real 0m0.357s 00:08:46.400 user 0m0.116s 00:08:46.400 sys 0m0.142s 00:08:46.400 23:14:10 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.400 23:14:10 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:46.400 ************************************ 00:08:46.400 END TEST nvme_multi_aen 00:08:46.400 ************************************ 00:08:46.400 23:14:10 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:46.400 23:14:10 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:46.400 23:14:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.400 23:14:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:46.400 ************************************ 00:08:46.400 START TEST nvme_startup 00:08:46.400 ************************************ 00:08:46.400 23:14:10 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:46.658 Initializing NVMe Controllers 00:08:46.658 Attached to 0000:00:13.0 00:08:46.658 Attached to 0000:00:10.0 00:08:46.658 Attached to 0000:00:11.0 00:08:46.658 Attached to 0000:00:12.0 00:08:46.658 Initialization complete. 00:08:46.658 Time used:132380.641 (us). 
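Note on the nvme_single_aen and nvme_multi_aen runs above: both follow the same round trip, register an asynchronous-event callback, drop each controller's temperature threshold below the reported 323 Kelvin so the emulated device fires a temperature AER, then restore the original 343 Kelvin threshold. A minimal sketch of the arming step, assuming the standard spdk_nvme_ctrlr_register_aer_callback() and the SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD feature code; the log-page-2 read that clears the event (visible in the aer_cb lines above) is elided:

    /* Sketch only: standard SPDK AER and set-feature calls assumed. */
    #include "spdk/nvme.h"

    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)arg;
        if (spdk_nvme_cpl_is_error(cpl)) {
            return;            /* AER aborted, e.g. on controller reset */
        }
        /* cpl->cdw0 carries the async event type/info; the tests read
         * log page 2 (SMART / health) at this point to clear the event. */
    }

    static void
    set_feature_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)arg; (void)cpl;  /* completion status checked by the real tests */
    }

    static int
    arm_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

        /* cdw11 bits 15:0 hold the threshold in Kelvin; 200 K is far
         * below the 323 K current temperature, so the AER fires. */
        return spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
                   SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                   200 /* cdw11 */, 0 /* cdw12 */, NULL, 0,
                   set_feature_done, NULL);
    }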
00:08:46.658 00:08:46.658 real 0m0.181s 00:08:46.658 user 0m0.057s 00:08:46.658 sys 0m0.078s 00:08:46.658 23:14:10 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.658 23:14:10 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:46.658 ************************************ 00:08:46.658 END TEST nvme_startup 00:08:46.658 ************************************ 00:08:46.658 23:14:10 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:46.659 23:14:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:46.659 23:14:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.659 23:14:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:46.659 ************************************ 00:08:46.659 START TEST nvme_multi_secondary 00:08:46.659 ************************************ 00:08:46.659 23:14:10 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:46.659 23:14:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75308 00:08:46.659 23:14:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:46.659 23:14:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75309 00:08:46.659 23:14:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:46.659 23:14:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:49.936 Initializing NVMe Controllers 00:08:49.936 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:49.936 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:49.936 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:49.936 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:49.936 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:49.936 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:49.936 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:49.936 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:49.936 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:49.936 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:49.936 Initialization complete. Launching workers. 
00:08:49.936 ======================================================== 00:08:49.936 Latency(us) 00:08:49.936 Device Information : IOPS MiB/s Average min max 00:08:49.936 PCIE (0000:00:13.0) NSID 1 from core 2: 2820.13 11.02 5673.01 871.13 30831.00 00:08:49.936 PCIE (0000:00:10.0) NSID 1 from core 2: 2820.13 11.02 5672.26 850.66 35060.73 00:08:49.936 PCIE (0000:00:11.0) NSID 1 from core 2: 2820.13 11.02 5682.81 912.62 30535.71 00:08:49.936 PCIE (0000:00:12.0) NSID 1 from core 2: 2820.13 11.02 5682.66 900.81 33952.46 00:08:49.936 PCIE (0000:00:12.0) NSID 2 from core 2: 2820.13 11.02 5682.43 970.91 28861.67 00:08:49.936 PCIE (0000:00:12.0) NSID 3 from core 2: 2820.13 11.02 5683.75 844.13 29701.10 00:08:49.936 ======================================================== 00:08:49.936 Total : 16920.80 66.10 5679.49 844.13 35060.73 00:08:49.936 00:08:49.936 23:14:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75308 00:08:49.936 Initializing NVMe Controllers 00:08:49.936 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:49.936 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:49.936 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:49.936 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:49.936 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:49.936 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:49.936 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:49.936 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:49.936 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:49.936 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:49.936 Initialization complete. Launching workers. 00:08:49.936 ======================================================== 00:08:49.936 Latency(us) 00:08:49.936 Device Information : IOPS MiB/s Average min max 00:08:49.936 PCIE (0000:00:13.0) NSID 1 from core 1: 6967.81 27.22 2295.76 1053.44 11841.81 00:08:49.936 PCIE (0000:00:10.0) NSID 1 from core 1: 6967.81 27.22 2294.79 1034.17 11222.46 00:08:49.936 PCIE (0000:00:11.0) NSID 1 from core 1: 6967.81 27.22 2295.68 1082.38 10457.99 00:08:49.936 PCIE (0000:00:12.0) NSID 1 from core 1: 6967.81 27.22 2295.65 1079.86 10530.73 00:08:49.936 PCIE (0000:00:12.0) NSID 2 from core 1: 6967.81 27.22 2295.61 1070.39 11829.46 00:08:49.936 PCIE (0000:00:12.0) NSID 3 from core 1: 6967.81 27.22 2295.67 980.80 12068.61 00:08:49.936 ======================================================== 00:08:49.936 Total : 41806.86 163.31 2295.53 980.80 12068.61 00:08:49.936 00:08:51.843 Initializing NVMe Controllers 00:08:51.843 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:51.844 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:51.844 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:51.844 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:51.844 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:51.844 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:51.844 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:51.844 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:51.844 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:51.844 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:51.844 Initialization complete. Launching workers. 
00:08:51.844 ======================================================== 00:08:51.844 Latency(us) 00:08:51.844 Device Information : IOPS MiB/s Average min max 00:08:51.844 PCIE (0000:00:13.0) NSID 1 from core 0: 10397.17 40.61 1538.48 696.59 11539.69 00:08:51.844 PCIE (0000:00:10.0) NSID 1 from core 0: 10397.17 40.61 1537.64 669.52 12474.17 00:08:51.844 PCIE (0000:00:11.0) NSID 1 from core 0: 10397.17 40.61 1538.51 686.13 11763.31 00:08:51.844 PCIE (0000:00:12.0) NSID 1 from core 0: 10397.17 40.61 1538.51 693.01 12325.25 00:08:51.844 PCIE (0000:00:12.0) NSID 2 from core 0: 10397.17 40.61 1538.52 700.38 12023.18 00:08:51.844 PCIE (0000:00:12.0) NSID 3 from core 0: 10397.17 40.61 1538.53 694.81 13072.83 00:08:51.844 ======================================================== 00:08:51.844 Total : 62382.99 243.68 1538.36 669.52 13072.83 00:08:51.844 00:08:51.844 23:14:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75309 00:08:51.844 23:14:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75378 00:08:51.844 23:14:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75379 00:08:51.844 23:14:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:51.844 23:14:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:51.844 23:14:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:55.122 Initializing NVMe Controllers 00:08:55.122 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:55.122 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:55.122 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:55.122 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:55.122 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:55.122 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:55.122 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:55.122 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:55.123 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:55.123 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:55.123 Initialization complete. Launching workers. 
00:08:55.123 ======================================================== 00:08:55.123 Latency(us) 00:08:55.123 Device Information : IOPS MiB/s Average min max 00:08:55.123 PCIE (0000:00:13.0) NSID 1 from core 1: 8170.08 31.91 1957.94 719.65 13717.77 00:08:55.123 PCIE (0000:00:10.0) NSID 1 from core 1: 8170.08 31.91 1957.06 706.39 14103.36 00:08:55.123 PCIE (0000:00:11.0) NSID 1 from core 1: 8170.08 31.91 1958.33 719.53 13767.39 00:08:55.123 PCIE (0000:00:12.0) NSID 1 from core 1: 8170.08 31.91 1958.64 733.24 12391.02 00:08:55.123 PCIE (0000:00:12.0) NSID 2 from core 1: 8170.08 31.91 1958.88 721.57 12460.65 00:08:55.123 PCIE (0000:00:12.0) NSID 3 from core 1: 8170.08 31.91 1958.99 721.39 13175.71 00:08:55.123 ======================================================== 00:08:55.123 Total : 49020.49 191.49 1958.31 706.39 14103.36 00:08:55.123 00:08:55.123 Initializing NVMe Controllers 00:08:55.123 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:55.123 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:55.123 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:55.123 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:55.123 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:55.123 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:55.123 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:55.123 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:55.123 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:55.123 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:55.123 Initialization complete. Launching workers. 00:08:55.123 ======================================================== 00:08:55.123 Latency(us) 00:08:55.123 Device Information : IOPS MiB/s Average min max 00:08:55.123 PCIE (0000:00:13.0) NSID 1 from core 0: 7743.80 30.25 2065.60 734.82 5398.88 00:08:55.123 PCIE (0000:00:10.0) NSID 1 from core 0: 7743.80 30.25 2064.45 718.15 5483.92 00:08:55.123 PCIE (0000:00:11.0) NSID 1 from core 0: 7743.80 30.25 2065.15 721.74 5759.82 00:08:55.123 PCIE (0000:00:12.0) NSID 1 from core 0: 7743.80 30.25 2064.91 561.29 5496.68 00:08:55.123 PCIE (0000:00:12.0) NSID 2 from core 0: 7743.80 30.25 2064.66 491.61 5330.39 00:08:55.123 PCIE (0000:00:12.0) NSID 3 from core 0: 7743.80 30.25 2064.41 403.12 5192.12 00:08:55.123 ======================================================== 00:08:55.123 Total : 46462.79 181.50 2064.86 403.12 5759.82 00:08:55.123 00:08:57.651 Initializing NVMe Controllers 00:08:57.651 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:57.651 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:57.651 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:57.651 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:57.651 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:57.651 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:57.651 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:57.651 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:57.651 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:57.651 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:57.651 Initialization complete. Launching workers. 
00:08:57.651 ======================================================== 00:08:57.651 Latency(us) 00:08:57.651 Device Information : IOPS MiB/s Average min max 00:08:57.651 PCIE (0000:00:13.0) NSID 1 from core 2: 4543.87 17.75 3520.79 759.47 13279.86 00:08:57.651 PCIE (0000:00:10.0) NSID 1 from core 2: 4543.87 17.75 3519.40 726.23 13208.15 00:08:57.651 PCIE (0000:00:11.0) NSID 1 from core 2: 4543.87 17.75 3520.51 732.69 12690.11 00:08:57.651 PCIE (0000:00:12.0) NSID 1 from core 2: 4543.87 17.75 3520.59 646.74 13080.48 00:08:57.651 PCIE (0000:00:12.0) NSID 2 from core 2: 4543.87 17.75 3520.33 544.12 12768.18 00:08:57.651 PCIE (0000:00:12.0) NSID 3 from core 2: 4543.87 17.75 3520.42 450.14 13357.42 00:08:57.651 ======================================================== 00:08:57.651 Total : 27263.20 106.50 3520.34 450.14 13357.42 00:08:57.651 00:08:57.651 23:14:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75378 00:08:57.651 ************************************ 00:08:57.651 END TEST nvme_multi_secondary 00:08:57.651 ************************************ 00:08:57.651 23:14:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75379 00:08:57.651 00:08:57.651 real 0m10.609s 00:08:57.651 user 0m18.359s 00:08:57.651 sys 0m0.533s 00:08:57.651 23:14:21 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.651 23:14:21 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:57.651 23:14:21 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:57.651 23:14:21 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:57.651 23:14:21 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/74334 ]] 00:08:57.651 23:14:21 nvme -- common/autotest_common.sh@1094 -- # kill 74334 00:08:57.651 23:14:21 nvme -- common/autotest_common.sh@1095 -- # wait 74334 00:08:57.651 [2024-11-17 23:14:21.042992] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75256) is not found. Dropping the request. 00:08:57.651 [2024-11-17 23:14:21.043080] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75256) is not found. Dropping the request. 00:08:57.651 [2024-11-17 23:14:21.043106] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75256) is not found. Dropping the request. 00:08:57.651 [2024-11-17 23:14:21.043130] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75256) is not found. Dropping the request. 00:08:57.651 [2024-11-17 23:14:21.043799] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75256) is not found. Dropping the request. 00:08:57.651 [2024-11-17 23:14:21.043842] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75256) is not found. Dropping the request. 00:08:57.651 [2024-11-17 23:14:21.043860] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75256) is not found. Dropping the request. 00:08:57.651 [2024-11-17 23:14:21.043912] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75256) is not found. Dropping the request. 00:08:57.651 [2024-11-17 23:14:21.044525] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75256) is not found. Dropping the request. 
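Note on the nvme_multi_secondary run above: the spdk_nvme_perf instances (pids 75308/75309 and then 75378/75379, on cores 0x1/0x2/0x4) attach to the same four controllers concurrently because every invocation passes -i 0, which places them in one shared-memory process group so the secondaries reuse the primary's hugepage mappings and PCI devices. A minimal sketch of the environment setup that flag maps to, assuming the long-standing spdk_env_opts field names; probe and cleanup elided:

    /* Sketch only: spdk_env_opts field names assumed per current SPDK. */
    #include "spdk/env.h"

    static int
    init_shared_env(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "perf";
        opts.shm_id = 0;   /* same id in every process (-i 0) puts them
                            * in one multi-process group sharing devices */
        return spdk_env_init(&opts);
    }

The "owning process (pid ...) is not found. Dropping the request." lines in the kill_stub teardown around this note are the expected fallout of that sharing: once a member process is killed, admin requests it still owned are dropped rather than completed.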
00:08:57.651 [2024-11-17 23:14:21.044587] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75256) is not found. Dropping the request. 00:08:57.651 [2024-11-17 23:14:21.044616] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75256) is not found. Dropping the request. 00:08:57.651 [2024-11-17 23:14:21.044640] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75256) is not found. Dropping the request. 00:08:57.651 [2024-11-17 23:14:21.045425] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75256) is not found. Dropping the request. 00:08:57.651 [2024-11-17 23:14:21.045664] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75256) is not found. Dropping the request. 00:08:57.651 [2024-11-17 23:14:21.045691] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75256) is not found. Dropping the request. 00:08:57.651 [2024-11-17 23:14:21.045712] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75256) is not found. Dropping the request. 00:08:57.651 [2024-11-17 23:14:21.099023] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:08:57.651 23:14:21 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:57.651 23:14:21 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:57.651 23:14:21 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:57.651 23:14:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.651 23:14:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.651 23:14:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.651 ************************************ 00:08:57.651 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:57.651 ************************************ 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:57.651 * Looking for test storage... 
00:08:57.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:57.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.651 --rc genhtml_branch_coverage=1 00:08:57.651 --rc genhtml_function_coverage=1 00:08:57.651 --rc genhtml_legend=1 00:08:57.651 --rc geninfo_all_blocks=1 00:08:57.651 --rc geninfo_unexecuted_blocks=1 00:08:57.651 00:08:57.651 ' 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:57.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.651 --rc genhtml_branch_coverage=1 00:08:57.651 --rc genhtml_function_coverage=1 00:08:57.651 --rc genhtml_legend=1 00:08:57.651 --rc geninfo_all_blocks=1 00:08:57.651 --rc geninfo_unexecuted_blocks=1 00:08:57.651 00:08:57.651 ' 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:57.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.651 --rc genhtml_branch_coverage=1 00:08:57.651 --rc genhtml_function_coverage=1 00:08:57.651 --rc genhtml_legend=1 00:08:57.651 --rc geninfo_all_blocks=1 00:08:57.651 --rc geninfo_unexecuted_blocks=1 00:08:57.651 00:08:57.651 ' 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:57.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.651 --rc genhtml_branch_coverage=1 00:08:57.651 --rc genhtml_function_coverage=1 00:08:57.651 --rc genhtml_legend=1 00:08:57.651 --rc geninfo_all_blocks=1 00:08:57.651 --rc geninfo_unexecuted_blocks=1 00:08:57.651 00:08:57.651 ' 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:57.651 
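[annotation] The xtrace above is autotest_common.sh probing the installed lcov through scripts/common.sh: "lt 1.15 2" splits both version strings on '.'/'-' and compares field by field, and a true result switches on the legacy "--rc lcov_*" option spellings. A minimal standalone sketch of that comparison, assuming purely numeric fields (the helper name ver_lt is mine, not the script's):

    ver_lt() {                           # ver_lt A B: succeed when version A < B
        local -a v1 v2
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                         # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo "lcov older than 2.x: use legacy --rc option names"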
23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:57.651 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:57.652 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:57.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.652 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:57.652 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75540 00:08:57.652 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:57.652 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:57.652 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75540 00:08:57.652 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 75540 ']' 00:08:57.652 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.652 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.652 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
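[annotation] Note that get_first_nvme_bdf above never walks sysfs directly: get_nvme_bdfs asks gen_nvme.sh for the generated SPDK JSON config, pulls every params.traddr out with jq, and the caller keeps element zero. A rough standalone equivalent, assuming jq is available and the same JSON shape as in this run:

    # Collect the NVMe PCI addresses (BDFs) known to the generated config.
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    bdf=${bdfs[0]}                       # first controller; 0000:00:10.0 in this run
    echo "$bdf"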
00:08:57.652 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.652 23:14:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:57.652 [2024-11-17 23:14:21.395691] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:08:57.652 [2024-11-17 23:14:21.395928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75540 ] 00:08:57.909 [2024-11-17 23:14:21.551533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.909 [2024-11-17 23:14:21.573796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.909 [2024-11-17 23:14:21.573950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.909 [2024-11-17 23:14:21.574112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.909 [2024-11-17 23:14:21.574145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.475 23:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.475 23:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:58.475 23:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:58.475 23:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.475 23:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:58.475 nvme0n1 00:08:58.475 23:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.733 23:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:58.733 23:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_r8vOv.txt 00:08:58.733 23:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:58.733 23:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.733 23:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:58.733 true 00:08:58.733 23:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.734 23:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:58.734 23:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731885262 00:08:58.734 23:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75563 00:08:58.734 23:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:58.734 23:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:58.734 
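[annotation] At this point the harness has everything staged for the "stuck admin command" scenario: spdk_tgt listens on /var/tmp/spdk.sock, the PCIe controller is attached as bdev nvme0, a one-shot error injection will hold the next admin GET FEATURES (opc 10) for up to 15 s and then fail it with sct=0/sc=1, and bdev_nvme_send_cmd runs in the background so such a command really is pending when the reset lands. Condensed from the rpc.py calls traced above (paths, flags, and payload exactly as in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # Submit the admin command that will get stuck (GET FEATURES, number of
    # queues); backgrounded so the controller can be reset while it is pending.
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &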
23:14:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:00.635 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:00.635 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.635 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:00.635 [2024-11-17 23:14:24.319275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:00.635 [2024-11-17 23:14:24.319516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:00.635 [2024-11-17 23:14:24.319545] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:00.635 [2024-11-17 23:14:24.319560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.635 [2024-11-17 23:14:24.321017] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:00.635 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75563 00:09:00.635 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.635 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75563 00:09:00.635 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75563 00:09:00.635 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:00.635 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:00.635 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:00.635 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.635 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:00.635 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.635 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:00.635 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_r8vOv.txt 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_r8vOv.txt 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75540 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 75540 ']' 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 75540 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75540 00:09:00.636 killing process with pid 75540 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75540' 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 75540 00:09:00.636 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 75540 00:09:00.894 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:00.894 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:00.894 ************************************ 00:09:00.894 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:00.894 ************************************ 00:09:00.894 00:09:00.894 real 0m3.549s 
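[annotation] The jq/base64/hexdump pipeline above extracts the NVMe status from the completion that bdev_nvme_send_cmd saved to the temp file: the .cpl field is the raw 16-byte completion queue entry in base64, whose last two bytes (little-endian) carry the phase bit in bit 0, SC in bits 1-8, and SCT in bits 9-11. The same slicing as a standalone sketch; base64_decode_bits was invoked with shift/mask pairs 1/255 and 9/3, and the spec-width 0x7 SCT mask below is equivalent here since sct is 0:

    cpl=AAAAAAAAAAAAAAAAAAACAA==        # the CQE captured in this run
    mapfile -t bytes < <(base64 -d <<< "$cpl" | hexdump -ve '/1 "0x%02x\n"')
    status=$(( bytes[15] << 8 | bytes[14] ))    # P bit 0, SC bits 1-8, SCT bits 9-11
    printf 'sc=0x%x sct=0x%x\n' $(( (status >> 1) & 0xff )) $(( (status >> 9) & 0x7 ))
    # -> sc=0x1 sct=0x0: the injected generic-type "invalid command opcode" status,
    #    matching the INVALID OPCODE (00/01) completion printed during the reset.

The assertions that close the test then verify the decoded pair equals the injected err_injection_sct/err_injection_sc and that diff_time (2 s here) stayed under the 5 s test_timeout.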
00:09:00.894 user 0m12.757s 00:09:00.894 sys 0m0.451s 00:09:00.894 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.894 23:14:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:00.894 23:14:24 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:00.894 23:14:24 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:00.894 23:14:24 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.894 23:14:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.894 23:14:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:00.894 ************************************ 00:09:00.894 START TEST nvme_fio 00:09:00.894 ************************************ 00:09:00.894 23:14:24 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:00.894 23:14:24 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:00.894 23:14:24 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:01.153 23:14:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:01.153 23:14:24 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:01.153 23:14:24 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:01.153 23:14:24 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:01.153 23:14:24 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:01.153 23:14:24 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:01.153 23:14:24 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:01.153 23:14:24 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:01.153 23:14:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:01.153 23:14:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:01.153 23:14:24 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:01.153 23:14:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:01.153 23:14:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:01.411 23:14:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:01.411 23:14:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:01.411 23:14:25 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:01.411 23:14:25 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:01.411 23:14:25 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:01.411 23:14:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:01.411 23:14:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:01.411 23:14:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:01.411 23:14:25 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:01.411 23:14:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:01.411 23:14:25 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:01.411 23:14:25 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:01.411 23:14:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:01.411 23:14:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:01.411 23:14:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:01.411 23:14:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:01.411 23:14:25 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:01.411 23:14:25 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:01.411 23:14:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:01.411 23:14:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:01.669 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:01.669 fio-3.35 00:09:01.669 Starting 1 thread 00:09:08.230 00:09:08.230 test: (groupid=0, jobs=1): err= 0: pid=75692: Sun Nov 17 23:14:31 2024 00:09:08.230 read: IOPS=25.5k, BW=99.4MiB/s (104MB/s)(199MiB/2001msec) 00:09:08.230 slat (nsec): min=4222, max=69151, avg=4789.78, stdev=1790.15 00:09:08.230 clat (usec): min=210, max=11257, avg=2507.47, stdev=685.93 00:09:08.230 lat (usec): min=214, max=11294, avg=2512.26, stdev=687.12 00:09:08.230 clat percentiles (usec): 00:09:08.230 | 1.00th=[ 1516], 5.00th=[ 2089], 10.00th=[ 2278], 20.00th=[ 2311], 00:09:08.230 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2376], 60.00th=[ 2409], 00:09:08.230 | 70.00th=[ 2442], 80.00th=[ 2474], 90.00th=[ 2606], 95.00th=[ 3195], 00:09:08.230 | 99.00th=[ 6128], 99.50th=[ 6325], 99.90th=[ 7635], 99.95th=[ 8586], 00:09:08.230 | 99.99th=[11076] 00:09:08.230 bw ( KiB/s): min=101664, max=104208, per=100.00%, avg=103309.33, stdev=1426.93, samples=3 00:09:08.230 iops : min=25416, max=26052, avg=25827.33, stdev=356.73, samples=3 00:09:08.230 write: IOPS=25.3k, BW=99.0MiB/s (104MB/s)(198MiB/2001msec); 0 zone resets 00:09:08.230 slat (nsec): min=4360, max=97944, avg=5071.05, stdev=1878.32 00:09:08.230 clat (usec): min=226, max=11186, avg=2514.19, stdev=697.90 00:09:08.230 lat (usec): min=231, max=11203, avg=2519.27, stdev=699.10 00:09:08.230 clat percentiles (usec): 00:09:08.230 | 1.00th=[ 1532], 5.00th=[ 2089], 10.00th=[ 2278], 20.00th=[ 2311], 00:09:08.230 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2376], 60.00th=[ 2409], 00:09:08.230 | 70.00th=[ 2442], 80.00th=[ 2474], 90.00th=[ 2606], 95.00th=[ 3294], 00:09:08.230 | 99.00th=[ 6194], 99.50th=[ 6390], 99.90th=[ 7701], 99.95th=[ 8848], 00:09:08.230 | 99.99th=[10945] 00:09:08.230 bw ( KiB/s): min=101288, max=104344, per=100.00%, avg=103120.00, stdev=1616.18, samples=3 00:09:08.230 iops : min=25322, max=26086, avg=25780.00, stdev=404.04, samples=3 00:09:08.230 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.08% 00:09:08.230 lat (msec) : 2=3.92%, 4=92.07%, 10=3.88%, 20=0.03% 00:09:08.230 cpu : usr=99.35%, sys=0.05%, 
ctx=4, majf=0, minf=626 00:09:08.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:08.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:08.230 issued rwts: total=50927,50724,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:08.230 00:09:08.230 Run status group 0 (all jobs): 00:09:08.230 READ: bw=99.4MiB/s (104MB/s), 99.4MiB/s-99.4MiB/s (104MB/s-104MB/s), io=199MiB (209MB), run=2001-2001msec 00:09:08.230 WRITE: bw=99.0MiB/s (104MB/s), 99.0MiB/s-99.0MiB/s (104MB/s-104MB/s), io=198MiB (208MB), run=2001-2001msec 00:09:08.230 ----------------------------------------------------- 00:09:08.230 Suppressions used: 00:09:08.230 count bytes template 00:09:08.230 1 32 /usr/src/fio/parse.c 00:09:08.230 1 8 libtcmalloc_minimal.so 00:09:08.230 ----------------------------------------------------- 00:09:08.230 00:09:08.230 23:14:31 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:08.230 23:14:31 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:08.230 23:14:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:08.230 23:14:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:08.488 23:14:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:08.488 23:14:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:08.752 23:14:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:08.752 23:14:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:08.752 23:14:32 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:08.752 23:14:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:08.752 23:14:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:08.752 23:14:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:08.752 23:14:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:08.752 23:14:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:08.752 23:14:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:08.752 23:14:32 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:08.752 23:14:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:08.752 23:14:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:08.752 23:14:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:08.752 23:14:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:08.752 23:14:32 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:08.752 23:14:32 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:08.752 23:14:32 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:08.752 23:14:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:08.752 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:08.752 fio-3.35 00:09:08.752 Starting 1 thread 00:09:16.869 00:09:16.869 test: (groupid=0, jobs=1): err= 0: pid=75747: Sun Nov 17 23:14:39 2024 00:09:16.869 read: IOPS=24.8k, BW=97.0MiB/s (102MB/s)(194MiB/2001msec) 00:09:16.869 slat (nsec): min=3322, max=69308, avg=4881.00, stdev=1942.32 00:09:16.869 clat (usec): min=211, max=8563, avg=2569.47, stdev=718.11 00:09:16.869 lat (usec): min=215, max=8632, avg=2574.35, stdev=719.35 00:09:16.869 clat percentiles (usec): 00:09:16.869 | 1.00th=[ 1729], 5.00th=[ 2212], 10.00th=[ 2278], 20.00th=[ 2311], 00:09:16.869 | 30.00th=[ 2343], 40.00th=[ 2343], 50.00th=[ 2376], 60.00th=[ 2409], 00:09:16.869 | 70.00th=[ 2442], 80.00th=[ 2474], 90.00th=[ 2900], 95.00th=[ 4293], 00:09:16.869 | 99.00th=[ 5735], 99.50th=[ 6194], 99.90th=[ 7308], 99.95th=[ 7439], 00:09:16.869 | 99.99th=[ 8291] 00:09:16.869 bw ( KiB/s): min=97184, max=102896, per=100.00%, avg=99845.33, stdev=2875.83, samples=3 00:09:16.869 iops : min=24296, max=25724, avg=24961.33, stdev=718.96, samples=3 00:09:16.869 write: IOPS=24.7k, BW=96.4MiB/s (101MB/s)(193MiB/2001msec); 0 zone resets 00:09:16.869 slat (nsec): min=3500, max=53247, avg=5170.59, stdev=1959.69 00:09:16.869 clat (usec): min=277, max=8394, avg=2581.99, stdev=733.12 00:09:16.869 lat (usec): min=282, max=8413, avg=2587.16, stdev=734.41 00:09:16.869 clat percentiles (usec): 00:09:16.869 | 1.00th=[ 1745], 5.00th=[ 2212], 10.00th=[ 2278], 20.00th=[ 2311], 00:09:16.869 | 30.00th=[ 2343], 40.00th=[ 2343], 50.00th=[ 2376], 60.00th=[ 2409], 00:09:16.869 | 70.00th=[ 2442], 80.00th=[ 2474], 90.00th=[ 2966], 95.00th=[ 4359], 00:09:16.869 | 99.00th=[ 5735], 99.50th=[ 6194], 99.90th=[ 7308], 99.95th=[ 7439], 00:09:16.869 | 99.99th=[ 8094] 00:09:16.869 bw ( KiB/s): min=96896, max=103040, per=100.00%, avg=99861.33, stdev=3077.55, samples=3 00:09:16.869 iops : min=24224, max=25760, avg=24965.33, stdev=769.39, samples=3 00:09:16.869 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.04% 00:09:16.869 lat (msec) : 2=2.27%, 4=91.73%, 10=5.92% 00:09:16.869 cpu : usr=99.30%, sys=0.05%, ctx=4, majf=0, minf=625 00:09:16.869 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:16.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.869 issued rwts: total=49699,49395,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.869 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.869 00:09:16.869 Run status group 0 (all jobs): 00:09:16.869 READ: bw=97.0MiB/s (102MB/s), 97.0MiB/s-97.0MiB/s (102MB/s-102MB/s), io=194MiB (204MB), run=2001-2001msec 00:09:16.869 WRITE: bw=96.4MiB/s (101MB/s), 96.4MiB/s-96.4MiB/s (101MB/s-101MB/s), io=193MiB (202MB), run=2001-2001msec 00:09:16.869 ----------------------------------------------------- 00:09:16.869 Suppressions used: 00:09:16.869 count bytes template 00:09:16.869 1 32 /usr/src/fio/parse.c 00:09:16.869 1 8 libtcmalloc_minimal.so 00:09:16.869 ----------------------------------------------------- 00:09:16.869 00:09:16.869 23:14:39 
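[annotation] Every fio_nvme invocation in this test repeats the wrapper logic traced above: ldd the SPDK fio plugin, look for libasan, and if the plugin was built with ASan, preload the sanitizer runtime ahead of the plugin, because fio itself is uninstrumented and the ASan interceptors must come first in LD_PRELOAD. Note also the dotted traddr: fio treats ':' as a filename separator, so the PCI address colons are rewritten as dots. Distilled into a sketch (variable names fio_config/bdf_fio are mine):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    fio_config=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
    bdf_fio=0000.00.10.0                 # 0000:00:10.0 with ':' rewritten for fio
    asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3; exit}')
    # ASan runtime (if any) must precede the instrumented plugin in LD_PRELOAD.
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
        /usr/src/fio/fio "$fio_config" "--filename=trtype=PCIe traddr=$bdf_fio" --bs=4096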
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:16.869 23:14:39 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:16.869 23:14:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:16.869 23:14:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:16.869 23:14:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:16.869 23:14:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:16.869 23:14:40 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:16.869 23:14:40 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:16.869 23:14:40 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:16.869 23:14:40 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:16.869 23:14:40 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:16.869 23:14:40 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:16.869 23:14:40 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:16.869 23:14:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:16.869 23:14:40 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:16.869 23:14:40 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:16.869 23:14:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:16.869 23:14:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:16.869 23:14:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:16.869 23:14:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:16.869 23:14:40 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:16.869 23:14:40 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:16.869 23:14:40 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:16.869 23:14:40 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:16.869 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:16.869 fio-3.35 00:09:16.869 Starting 1 thread 00:09:25.002 00:09:25.002 test: (groupid=0, jobs=1): err= 0: pid=75803: Sun Nov 17 23:14:47 2024 00:09:25.002 read: IOPS=24.3k, BW=94.9MiB/s (99.6MB/s)(190MiB/2001msec) 00:09:25.002 slat (nsec): min=3390, max=59945, avg=4888.01, stdev=2009.31 00:09:25.002 clat (usec): min=204, max=10920, avg=2636.52, stdev=827.33 00:09:25.002 lat (usec): min=209, max=10968, avg=2641.41, stdev=828.56 00:09:25.002 clat percentiles (usec): 00:09:25.002 | 1.00th=[ 1647], 5.00th=[ 2073], 10.00th=[ 2212], 20.00th=[ 2311], 00:09:25.002 | 30.00th=[ 2343], 40.00th=[ 
2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:25.002 | 70.00th=[ 2474], 80.00th=[ 2638], 90.00th=[ 3326], 95.00th=[ 4686], 00:09:25.002 | 99.00th=[ 5997], 99.50th=[ 6521], 99.90th=[ 7898], 99.95th=[ 8848], 00:09:25.002 | 99.99th=[10814] 00:09:25.002 bw ( KiB/s): min=82280, max=106200, per=97.74%, avg=95034.67, stdev=12038.94, samples=3 00:09:25.002 iops : min=20570, max=26550, avg=23758.67, stdev=3009.74, samples=3 00:09:25.002 write: IOPS=24.2k, BW=94.3MiB/s (98.9MB/s)(189MiB/2001msec); 0 zone resets 00:09:25.002 slat (usec): min=3, max=103, avg= 5.13, stdev= 2.03 00:09:25.002 clat (usec): min=283, max=10840, avg=2627.62, stdev=805.79 00:09:25.002 lat (usec): min=287, max=10858, avg=2632.75, stdev=806.95 00:09:25.002 clat percentiles (usec): 00:09:25.002 | 1.00th=[ 1647], 5.00th=[ 2073], 10.00th=[ 2212], 20.00th=[ 2311], 00:09:25.002 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:25.002 | 70.00th=[ 2474], 80.00th=[ 2638], 90.00th=[ 3261], 95.00th=[ 4555], 00:09:25.002 | 99.00th=[ 5997], 99.50th=[ 6325], 99.90th=[ 7963], 99.95th=[ 8848], 00:09:25.002 | 99.99th=[10552] 00:09:25.002 bw ( KiB/s): min=82552, max=106160, per=98.38%, avg=95048.00, stdev=11864.70, samples=3 00:09:25.002 iops : min=20638, max=26540, avg=23762.00, stdev=2966.17, samples=3 00:09:25.002 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.04% 00:09:25.002 lat (msec) : 2=3.63%, 4=89.39%, 10=6.88%, 20=0.03% 00:09:25.002 cpu : usr=99.25%, sys=0.05%, ctx=5, majf=0, minf=626 00:09:25.002 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:25.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.002 issued rwts: total=48638,48330,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.002 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:25.002 00:09:25.002 Run status group 0 (all jobs): 00:09:25.002 READ: bw=94.9MiB/s (99.6MB/s), 94.9MiB/s-94.9MiB/s (99.6MB/s-99.6MB/s), io=190MiB (199MB), run=2001-2001msec 00:09:25.002 WRITE: bw=94.3MiB/s (98.9MB/s), 94.3MiB/s-94.3MiB/s (98.9MB/s-98.9MB/s), io=189MiB (198MB), run=2001-2001msec 00:09:25.002 ----------------------------------------------------- 00:09:25.002 Suppressions used: 00:09:25.002 count bytes template 00:09:25.002 1 32 /usr/src/fio/parse.c 00:09:25.002 1 8 libtcmalloc_minimal.so 00:09:25.002 ----------------------------------------------------- 00:09:25.003 00:09:25.003 23:14:47 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:25.003 23:14:47 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:25.003 23:14:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:25.003 23:14:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:25.003 23:14:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:25.003 23:14:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:25.003 23:14:48 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:25.003 23:14:48 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:25.003 23:14:48 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:25.003 23:14:48 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:25.003 23:14:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:25.003 23:14:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:25.003 23:14:48 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:25.003 23:14:48 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:25.003 23:14:48 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:25.003 23:14:48 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:25.003 23:14:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:25.003 23:14:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:25.003 23:14:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:25.003 23:14:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:25.003 23:14:48 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:25.003 23:14:48 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:25.003 23:14:48 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:25.003 23:14:48 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:25.003 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:25.003 fio-3.35 00:09:25.003 Starting 1 thread 00:09:33.148 00:09:33.148 test: (groupid=0, jobs=1): err= 0: pid=75858: Sun Nov 17 23:14:55 2024 00:09:33.148 read: IOPS=24.8k, BW=97.0MiB/s (102MB/s)(194MiB/2001msec) 00:09:33.148 slat (nsec): min=3356, max=56522, avg=4866.02, stdev=1821.83 00:09:33.148 clat (usec): min=237, max=10393, avg=2572.26, stdev=672.04 00:09:33.148 lat (usec): min=242, max=10450, avg=2577.12, stdev=673.06 00:09:33.148 clat percentiles (usec): 00:09:33.148 | 1.00th=[ 1598], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2311], 00:09:33.148 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2376], 60.00th=[ 2409], 00:09:33.148 | 70.00th=[ 2474], 80.00th=[ 2606], 90.00th=[ 3064], 95.00th=[ 4080], 00:09:33.148 | 99.00th=[ 5473], 99.50th=[ 5669], 99.90th=[ 6915], 99.95th=[ 8455], 00:09:33.148 | 99.99th=[10159] 00:09:33.148 bw ( KiB/s): min=91936, max=101352, per=98.83%, avg=98192.00, stdev=5417.95, samples=3 00:09:33.148 iops : min=22984, max=25338, avg=24548.00, stdev=1354.49, samples=3 00:09:33.148 write: IOPS=24.7k, BW=96.4MiB/s (101MB/s)(193MiB/2001msec); 0 zone resets 00:09:33.148 slat (nsec): min=3450, max=54552, avg=5093.68, stdev=1811.96 00:09:33.148 clat (usec): min=223, max=10279, avg=2578.33, stdev=677.75 00:09:33.148 lat (usec): min=227, max=10298, avg=2583.43, stdev=678.75 00:09:33.148 clat percentiles (usec): 00:09:33.148 | 1.00th=[ 1614], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2311], 00:09:33.148 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2376], 60.00th=[ 2409], 00:09:33.148 | 70.00th=[ 2474], 80.00th=[ 2638], 90.00th=[ 3064], 95.00th=[ 4146], 
00:09:33.148 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 6915], 99.95th=[ 8717], 00:09:33.148 | 99.99th=[10028] 00:09:33.148 bw ( KiB/s): min=91240, max=102128, per=99.45%, avg=98205.33, stdev=6048.18, samples=3 00:09:33.148 iops : min=22810, max=25532, avg=24551.33, stdev=1512.05, samples=3 00:09:33.148 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.04% 00:09:33.148 lat (msec) : 2=3.50%, 4=91.08%, 10=5.35%, 20=0.01% 00:09:33.148 cpu : usr=99.15%, sys=0.20%, ctx=4, majf=0, minf=625 00:09:33.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:33.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.148 issued rwts: total=49702,49399,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.148 00:09:33.148 Run status group 0 (all jobs): 00:09:33.148 READ: bw=97.0MiB/s (102MB/s), 97.0MiB/s-97.0MiB/s (102MB/s-102MB/s), io=194MiB (204MB), run=2001-2001msec 00:09:33.148 WRITE: bw=96.4MiB/s (101MB/s), 96.4MiB/s-96.4MiB/s (101MB/s-101MB/s), io=193MiB (202MB), run=2001-2001msec 00:09:33.149 ----------------------------------------------------- 00:09:33.149 Suppressions used: 00:09:33.149 count bytes template 00:09:33.149 1 32 /usr/src/fio/parse.c 00:09:33.149 1 8 libtcmalloc_minimal.so 00:09:33.149 ----------------------------------------------------- 00:09:33.149 00:09:33.149 ************************************ 00:09:33.149 END TEST nvme_fio 00:09:33.149 ************************************ 00:09:33.149 23:14:56 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:33.149 23:14:56 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:33.149 00:09:33.149 real 0m31.355s 00:09:33.149 user 0m16.906s 00:09:33.149 sys 0m27.928s 00:09:33.149 23:14:56 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.149 23:14:56 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:33.149 ************************************ 00:09:33.149 END TEST nvme 00:09:33.149 ************************************ 00:09:33.149 00:09:33.149 real 1m40.156s 00:09:33.149 user 3m33.919s 00:09:33.149 sys 0m38.307s 00:09:33.149 23:14:56 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.149 23:14:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:33.149 23:14:56 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:33.149 23:14:56 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:33.149 23:14:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.149 23:14:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.149 23:14:56 -- common/autotest_common.sh@10 -- # set +x 00:09:33.149 ************************************ 00:09:33.149 START TEST nvme_scc 00:09:33.149 ************************************ 00:09:33.149 23:14:56 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:33.149 * Looking for test storage... 
00:09:33.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:33.149 23:14:56 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:33.149 23:14:56 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:33.149 23:14:56 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:33.149 23:14:56 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:33.149 23:14:56 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.149 23:14:56 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:33.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.149 --rc genhtml_branch_coverage=1 00:09:33.149 --rc genhtml_function_coverage=1 00:09:33.149 --rc genhtml_legend=1 00:09:33.149 --rc geninfo_all_blocks=1 00:09:33.149 --rc geninfo_unexecuted_blocks=1 00:09:33.149 00:09:33.149 ' 00:09:33.149 23:14:56 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:33.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.149 --rc genhtml_branch_coverage=1 00:09:33.149 --rc genhtml_function_coverage=1 00:09:33.149 --rc genhtml_legend=1 00:09:33.149 --rc geninfo_all_blocks=1 00:09:33.149 --rc geninfo_unexecuted_blocks=1 00:09:33.149 00:09:33.149 ' 00:09:33.149 23:14:56 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:33.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.149 --rc genhtml_branch_coverage=1 00:09:33.149 --rc genhtml_function_coverage=1 00:09:33.149 --rc genhtml_legend=1 00:09:33.149 --rc geninfo_all_blocks=1 00:09:33.149 --rc geninfo_unexecuted_blocks=1 00:09:33.149 00:09:33.149 ' 00:09:33.149 23:14:56 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:33.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.149 --rc genhtml_branch_coverage=1 00:09:33.149 --rc genhtml_function_coverage=1 00:09:33.149 --rc genhtml_legend=1 00:09:33.149 --rc geninfo_all_blocks=1 00:09:33.149 --rc geninfo_unexecuted_blocks=1 00:09:33.149 00:09:33.149 ' 00:09:33.149 23:14:56 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:33.149 23:14:56 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:33.149 23:14:56 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:33.149 23:14:56 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:33.149 23:14:56 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.149 23:14:56 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.149 23:14:56 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.149 23:14:56 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.149 23:14:56 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.149 23:14:56 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:33.149 23:14:56 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
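[annotation] The PATH echoed by paths/export.sh just above carries the golangci/protoc/go prefixes four times over, because each sourcing prepends them unconditionally. That is harmless for the test run, but a guard along these lines (a suggestion, not what export.sh actually does) would keep repeated sourcing idempotent:

    prepend_path() {                     # prepend $1 only if not already in PATH
        case ":$PATH:" in
            *":$1:"*) ;;                 # already present: leave PATH untouched
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/protoc/21.7/bin
    export PATH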
00:09:33.149 23:14:56 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:33.149 23:14:56 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:33.149 23:14:56 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:33.149 23:14:56 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:33.149 23:14:56 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:33.149 23:14:56 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:33.149 23:14:56 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:33.149 23:14:56 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:33.149 23:14:56 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:33.149 23:14:56 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:33.149 23:14:56 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:33.149 23:14:56 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:33.149 23:14:56 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:33.149 23:14:56 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:33.149 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:33.149 Waiting for block devices as requested 00:09:33.149 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.149 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.149 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.149 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:38.465 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:38.465 23:15:02 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:38.465 23:15:02 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:38.465 23:15:02 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:38.465 23:15:02 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.465 23:15:02 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:38.465 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
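Every pair of eval lines in the trace from here on is one pass of the same three-step idiom at functions.sh@21-23: read a "field : value" line of nvme id-ctrl output with IFS=:, skip it when the value is empty, and eval the assignment into the associative array named by $ref. A self-contained sketch of the idea, assuming nvme-cli's plain-text output format and trimming keys ourselves (the traced script arrives at already-clean keys):

declare -A nvme0
while IFS=: read -r reg val; do
  [[ -n $val ]] || continue            # headers and blank lines carry no value
  reg=${reg//[[:space:]]/}             # "vid       " -> "vid"
  val=${val# }                         # drop the space after the colon; trailing
                                       # padding stays, as in sn='12341 ' above
  nvme0[$reg]=$val                     # the traced script does this via eval so
                                       # the target array name can come from $ref
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
echo "${nvme0[vid]}"                   # 0x1b36 on this QEMU controller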
00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:38.466 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:38.467 23:15:02 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:38.467 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.468 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.468 23:15:02 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:38.469 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
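The namespace numbers above are easy to sanity-check: nsze, ncap and nuse are all 0x140000 logical blocks (fully allocated, consistent with nsfeat=0x14, whose thin-provisioning bit 0 is clear), and flbas=0x4 selects LBA format 4, listed a little further below as "ms:0 lbads:12 (in use)", i.e. 4096-byte blocks with no metadata. The namespace size therefore works out to exactly 5 GiB:

nsze=0x140000                      # logical blocks, from the id-ns fields above
lbads=12                           # flbas=0x4 -> lbaf4: "ms:0 lbads:12 (in use)"
echo $(( nsze * (1 << lbads) ))    # 5368709120 bytes = exactly 5 GiB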
00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.470 23:15:02 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:38.470 23:15:02 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:38.471 23:15:02 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:38.471 23:15:02 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:38.471 23:15:02 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.471 23:15:02 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 23:15:02 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:38.471 23:15:02 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.471 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.471 
23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
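Once a controller's namespaces are done, the registration step traced above after nvme0n1 (functions.sh@58-63) records everything in the arrays declared at the top of this section, and the outer loop moves on to nvme1 at 0000:00:10.0 (serial 12340), whose fields are parsed identically. A sketch of the shapes those registries end up with, using the nvme0 values as an illustration (the declared type of nvme0_ns is an assumption; the trace only shows it being indexed):

declare -A ctrls=( [nvme0]=nvme0 )          # controller -> its id-ctrl array name
declare -A nvmes=( [nvme0]=nvme0_ns )       # controller -> its namespace-map array
declare -A bdfs=( [nvme0]="0000:00:11.0" )  # controller -> PCI address
declare -a ordered_ctrls=( [0]=nvme0 )      # index = the N in nvmeN, scan order
declare -A nvme0_ns=( [1]=nvme0n1 )         # namespace number -> per-ns array name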
00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:38.472 23:15:02 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:38.472 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 23:15:02 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:38.473 23:15:02 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:38.473 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:38.474 23:15:02 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
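The fabrics-only fields just parsed (ioccsz, iorcsz, icdoff) are zero, as expected for a local PCIe device; the value a run like this nvme_scc (simple-copy) test presumably gates on came a few entries earlier: nvme1[oncs]=0x15d. Per the NVMe base spec, ONCS bit 8 advertises the Copy command. A hedged check against the array just built (the bit positions are from the spec, not from this log):

```bash
# ONCS bits (NVMe Identify Controller): 0=Compare, 2=DSM, 3=Write Zeroes, 8=Copy
oncs=${nvme1[oncs]}                                 # 0x15d in this run
(( oncs & 0x100 )) && echo "nvme1: Copy supported"
(( oncs & 0x001 )) && echo "nvme1: Compare supported"
```

0x15d sets bits 0, 2, 3, 4, 6 and 8, so both checks pass here.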
00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:38.474 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.475 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.476 
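nvme1n1's geometry is now on record: nsze/ncap/nuse are 0x17a17a blocks, flbas is 0x7, and the lbafN descriptors being filled in (lbaf7, completed just below, is the one tagged "(in use)") carry the metadata size and the lbads block-size exponent. A sketch of turning those strings into a byte count — the regex is mine; the field layout follows the Identify Namespace spec:

```bash
# flbas low nibble = index of the in-use LBA format.
fmt=$(( ${nvme1n1[flbas]} & 0xf ))     # 0x7 -> format 7
lbaf=${nvme1n1[lbaf$fmt]}              # 'ms:64 lbads:12 rp:0 (in use)'
if [[ $lbaf =~ lbads:([0-9]+) ]]; then
	bs=$(( 1 << BASH_REMATCH[1] ))     # 2^12 = 4096-byte blocks
	echo "nvme1n1: $bs B blocks, $(( ${nvme1n1[nsze]} * bs )) B total"
fi
```

0x17a17a blocks at 4 KiB each is roughly 6.3 GB.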
23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:38.476 23:15:02 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:38.476 23:15:02 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:38.476 23:15:02 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.476 23:15:02 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:38.476 23:15:02 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:38.476 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:38.477 23:15:02 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:38.477 23:15:02 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:38.477 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
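From here down the identical register walk repeats for nvme2; the hop between controllers happened just above via the /sys/class/nvme glob at functions.sh@47 and the pci_can_use gate from scripts/common.sh (both allow/block lists are empty in this run, hence the bare [[ -z '' ]]). A rough reconstruction of that outer loop — the readlink-based BDF lookup is an assumption, since the trace only shows the resulting pci=0000:00:12.0:

```bash
declare -A ctrls nvmes bdfs; declare -a ordered_ctrls   # filled at @60-@63
for ctrl in /sys/class/nvme/nvme*; do
	[[ -e $ctrl ]] || continue
	pci=$(basename "$(readlink -f "$ctrl/device")")     # e.g. 0000:00:12.0 (assumed)
	pci_can_use "$pci" || continue                      # allow/block list filter
	ctrl_dev=${ctrl##*/}                                # nvme2
	nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"       # the walk shown in this log
	ctrls[$ctrl_dev]=$ctrl_dev
	nvmes[$ctrl_dev]=${ctrl_dev}_ns
	bdfs[$ctrl_dev]=$pci
	ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
done
```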
00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:38.478 23:15:02 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:38.478 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:38.479 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:38.480 
23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
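The @16-@23 lines above are the whole ingestion mechanism: nvme_get runs nvme-cli, then a read loop splits each "reg : val" output line on the colon and eval's the pair into a global associative array named after the device. A minimal self-contained sketch of that pattern (parse_id and its internals are an illustrative reconstruction from this trace, not the verbatim functions.sh source):

    #!/usr/bin/env bash
    # parse_id NAME CMD...: populate global assoc array NAME from "key : value" lines of CMD
    parse_id() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # e.g. declare -gA nvme2n1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # skip banner/blank lines with no value
            reg=${reg//[[:space:]]/}         # strip padding around the register name
            val=${val# }                     # drop the space that follows ':'
            eval "${ref}[\$reg]=\$val"       # e.g. nvme2n1[nsze]=0x100000
        done < <("$@")
    }

    parse_id nvme2n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1   # as invoked above

Stripping whitespace from the key is what turns nvme-cli's "ps    0" row into the ps0 index seen earlier, and keeping the remainder of the line in val is why multi-field rows like the power-state descriptor survive as a single string.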
00:09:38.480 23:15:02 nvme_scc -- nvme/functions.sh@21-23 -- # nvme_get nvme2n1: id-ns registers parsed into nvme2n1[]:
00:09:38.480 23:15:02 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:38.480 23:15:02 nvme_scc --   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:38.481 23:15:02 nvme_scc --   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:38.481 23:15:02 nvme_scc --   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:38.482 23:15:02 nvme_scc --   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:38.482 23:15:02 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
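With nvme2n1 populated, the in-use block size follows directly from flbas and the selected lbaf entry: the low nibble of flbas (0x4) indexes lbaf4, whose lbads:12 means 2^12 = 4096-byte logical blocks. An illustrative derivation against the values above (the variable handling here is an assumption, not functions.sh code):

    flbas_idx=$(( ${nvme2n1[flbas]} & 0xf ))    # low nibble of 0x4 -> 4
    lbaf=${nvme2n1[lbaf$flbas_idx]}             # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}   # extract '12'
    echo $(( 1 << lbads ))                      # 4096-byte blocks, no metadata (ms:0)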
00:09:38.482 23:15:02 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:38.482 23:15:02 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:38.482 23:15:02 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:38.482 23:15:02 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:38.482 23:15:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:09:38.482 23:15:02 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:38.482 23:15:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:09:38.482 23:15:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:09:38.482 23:15:02 nvme_scc -- nvme/functions.sh@21-23 -- # nvme_get nvme2n2: id-ns registers parsed into nvme2n2[], identical to nvme2n1:
00:09:38.482 23:15:02 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:38.483 23:15:02 nvme_scc --   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:38.483 23:15:02 nvme_scc --   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:38.483 23:15:02 nvme_scc --   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:38.483 23:15:02 nvme_scc --   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
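The reason this nvme_scc suite collects these arrays at all is to decide which controllers can run Simple Copy: ONCS bit 8 (0x100) advertises the Copy command, and the oncs=0x15d captured for nvme2 above has that bit set. A hedged sketch of such a capability check (scc_supported is an illustrative name, not necessarily the helper functions.sh actually defines):

    scc_supported() {                      # $1 = controller array name, e.g. nvme2
        local -n ctrl=$1                   # nameref into the populated assoc array
        (( ${ctrl[oncs]:-0} & 0x100 ))     # ONCS bit 8 -> Copy (Simple Copy) supported
    }
    scc_supported nvme2 && echo "nvme2 supports Simple Copy"   # true here: 0x15d & 0x100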
00:09:38.483 23:15:02 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:09:38.483 23:15:02 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:38.484 23:15:02 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:09:38.484 23:15:02 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:09:38.484 23:15:02 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:38.484 23:15:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:09:38.484 23:15:02 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:38.484 23:15:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:09:38.484 23:15:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:09:38.484 23:15:02 nvme_scc -- nvme/functions.sh@21-23 -- # nvme_get nvme2n3: id-ns registers parsed into nvme2n3[], matching nvme2n1/nvme2n2 so far:
00:09:38.484 23:15:02 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:38.484 23:15:02 nvme_scc --   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0
00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21
-- # read -r reg val 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.485 
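The values captured so far make the namespace size easy to sanity-check: nsze, ncap and nuse all read 0x100000 blocks, flbas=0x4 selects LBA format 4, and the lbaf4 descriptor reported just below shows lbads:12, i.e. 4096-byte blocks, so the namespace works out to 2^20 x 2^12 bytes = 4 GiB. In shell:

  # 0x100000 blocks at 2^12 bytes per block
  printf '%d bytes (%d GiB)\n' $(( 0x100000 * 4096 )) $(( (0x100000 * 4096) >> 30 ))
  # prints: 4294967296 bytes (4 GiB)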
23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.485 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:38.749 23:15:02 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:38.749 23:15:02 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:38.749 23:15:02 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.749 23:15:02 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:38.749 23:15:02 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.749 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 
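Two of the registers just captured, wctemp=343 and cctemp=373, are the warning and critical composite temperature thresholds, which NVMe reports in Kelvin; subtracting 273 puts this QEMU controller's thresholds at roughly 70 C and 100 C:

  # Kelvin as reported by id-ctrl; an integer offset of 273 is close enough here
  echo "warning: $(( 343 - 273 )) C, critical: $(( 373 - 273 )) C"
  # prints: warning: 70 C, critical: 100 C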
23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.750 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.751 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.752 23:15:02 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:38.752 23:15:02 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:38.752 
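The ctrl_has_scc checks running here come down to a single bit test: ONCS (Optional NVM Command Support) from Identify Controller is a bitmask, and bit 8 advertises the Copy command. Every controller in this run reports oncs=0x15d, which has bit 8 set, so all four pass. The test in isolation:

  # 0x15d = 0b1_0101_1101; bit 8 (value 256) flags Copy (simple copy) support
  oncs=0x15d
  if (( oncs & (1 << 8) )); then
      echo "controller supports simple copy"
  fi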
23:15:02 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:38.752 23:15:02 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:38.752 23:15:02 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:38.752 23:15:02 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:38.752 23:15:02 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:39.015 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:39.587 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.587 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.848 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.848 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:39.848 23:15:03 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:39.848 23:15:03 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:39.848 23:15:03 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.848 23:15:03 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:39.848 ************************************ 00:09:39.848 START TEST nvme_simple_copy 00:09:39.848 ************************************ 00:09:39.848 23:15:03 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:40.109 Initializing NVMe Controllers 00:09:40.109 Attaching to 0000:00:10.0 00:09:40.109 Controller supports SCC. Attached to 0000:00:10.0 00:09:40.109 Namespace ID: 1 size: 6GB 00:09:40.109 Initialization complete. 00:09:40.109 00:09:40.109 Controller QEMU NVMe Ctrl (12340 ) 00:09:40.109 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:40.109 Namespace Block Size:4096 00:09:40.109 Writing LBAs 0 to 63 with Random Data 00:09:40.109 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:40.109 LBAs matching Written Data: 64 00:09:40.109 00:09:40.109 real 0m0.253s 00:09:40.109 user 0m0.095s 00:09:40.109 sys 0m0.057s 00:09:40.109 23:15:03 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.109 ************************************ 00:09:40.109 END TEST nvme_simple_copy 00:09:40.109 ************************************ 00:09:40.109 23:15:03 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:40.109 00:09:40.109 real 0m7.733s 00:09:40.109 user 0m1.050s 00:09:40.109 sys 0m1.439s 00:09:40.109 23:15:03 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.109 23:15:03 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:40.109 ************************************ 00:09:40.109 END TEST nvme_scc 00:09:40.109 ************************************ 00:09:40.109 23:15:03 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:40.109 23:15:03 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:40.109 23:15:03 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:40.109 23:15:03 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:40.109 23:15:03 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:40.109 23:15:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:40.109 23:15:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.109 23:15:03 -- common/autotest_common.sh@10 -- # set +x 00:09:40.370 ************************************ 00:09:40.370 START TEST nvme_fdp 00:09:40.370 ************************************ 00:09:40.370 23:15:03 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:40.370 * Looking for test storage... 
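The simple_copy pass above writes LBAs 0 through 63 with random data, copies them to destination LBA 256, and confirms all 64 destination LBAs match. The same verification can be reproduced from the shell; the device path here is an assumption for illustration (the controller is rebound during the test), while the 4096-byte block size matches what the test printed:

  # Compare the 64 source LBAs against the destination window at LBA 256
  bs=4096
  dd if=/dev/nvme0n1 bs=$bs skip=0   count=64 of=/tmp/src.bin status=none
  dd if=/dev/nvme0n1 bs=$bs skip=256 count=64 of=/tmp/dst.bin status=none
  cmp /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"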
00:09:40.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:40.370 23:15:04 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:40.370 23:15:04 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:40.370 23:15:04 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:40.370 23:15:04 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.370 23:15:04 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:40.371 23:15:04 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.371 23:15:04 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:40.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.371 --rc genhtml_branch_coverage=1 00:09:40.371 --rc genhtml_function_coverage=1 00:09:40.371 --rc genhtml_legend=1 00:09:40.371 --rc geninfo_all_blocks=1 00:09:40.371 --rc geninfo_unexecuted_blocks=1 00:09:40.371 00:09:40.371 ' 00:09:40.371 23:15:04 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:40.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.371 --rc genhtml_branch_coverage=1 00:09:40.371 --rc genhtml_function_coverage=1 00:09:40.371 --rc genhtml_legend=1 00:09:40.371 --rc geninfo_all_blocks=1 00:09:40.371 --rc geninfo_unexecuted_blocks=1 00:09:40.371 00:09:40.371 ' 00:09:40.371 23:15:04 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:40.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.371 --rc genhtml_branch_coverage=1 00:09:40.371 --rc genhtml_function_coverage=1 00:09:40.371 --rc genhtml_legend=1 00:09:40.371 --rc geninfo_all_blocks=1 00:09:40.371 --rc geninfo_unexecuted_blocks=1 00:09:40.371 00:09:40.371 ' 00:09:40.371 23:15:04 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:40.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.371 --rc genhtml_branch_coverage=1 00:09:40.371 --rc genhtml_function_coverage=1 00:09:40.371 --rc genhtml_legend=1 00:09:40.371 --rc geninfo_all_blocks=1 00:09:40.371 --rc geninfo_unexecuted_blocks=1 00:09:40.371 00:09:40.371 ' 00:09:40.371 23:15:04 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:40.371 23:15:04 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:40.371 23:15:04 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:40.371 23:15:04 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:40.371 23:15:04 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:40.371 23:15:04 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:40.371 23:15:04 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.371 23:15:04 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.371 23:15:04 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.371 23:15:04 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.371 23:15:04 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.371 23:15:04 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.371 23:15:04 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:40.371 23:15:04 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
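The lt/cmp_versions trace above splits each version string on '.', '-' and ':' (the IFS=.-: reads) and compares the fields numerically, so lcov 1.15 sorts below 2 and the branch/function coverage flags get exported. A minimal sketch of that comparison, reconstructed from the xtrace rather than copied verbatim from scripts/common.sh:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local v d1 d2
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}      # missing fields compare as 0
        (( d1 > d2 )) && { [[ $2 == '>' ]]; return; }
        (( d1 < d2 )) && { [[ $2 == '<' ]]; return; }
    done
    [[ $2 == *'='* ]]    # equal versions only satisfy <=, >= or ==
}

# lt 1.15 2  -> field 0: 1 < 2 and the op is '<', so the check succeeds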
00:09:40.371 23:15:04 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:40.371 23:15:04 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:40.371 23:15:04 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:40.371 23:15:04 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:40.371 23:15:04 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:40.371 23:15:04 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:40.371 23:15:04 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:40.371 23:15:04 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:40.371 23:15:04 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:40.371 23:15:04 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.371 23:15:04 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:40.632 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:40.893 Waiting for block devices as requested 00:09:40.893 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:40.893 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.156 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.156 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:46.453 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:46.453 23:15:09 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:46.453 23:15:09 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:46.453 23:15:09 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:46.453 23:15:09 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:46.453 23:15:09 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:46.453 23:15:09 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:46.453 23:15:09 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:46.453 23:15:09 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:46.453 23:15:09 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.453 23:15:09 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:46.453 23:15:09 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:46.453 23:15:09 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:46.453 23:15:09 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:46.453 23:15:09 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:46.453 23:15:09 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:46.453 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
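Everything that follows, down to the second controller, is scan_nvme_ctrls walking /sys/class/nvme/nvme* and calling nvme_get, which parses `nvme id-ctrl` / `nvme id-ns` output line by line into global associative arrays (nvme0, nvme0n1, ...). A condensed sketch of that parsing loop, reconstructed from the trace (the real helper lives in test/common/nvme/functions.sh; NVME_CLI below stands in for the /usr/local/src/nvme-cli/nvme binary used here):

nvme_get() {    # e.g. nvme_get nvme0 id-ctrl /dev/nvme0
    local ref=$1 reg val
    local -gA "$ref=()"              # global associative array, as in the trace
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}     # 'vid ', 'ssvid ', ... -> bare key
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[${reg}]=\"${val# }\""    # nvme0[vid]=0x1b36, nvme0[sn]='12341 ', ...
    done < <("$NVME_CLI" "$2" "$3")
}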
00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:46.454 23:15:09 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:46.454 23:15:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:46.454 23:15:10 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
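Most of the fields being captured here are bit masks, so once the array is populated a capability check is a one-line arithmetic test. A hedged example (the helper name is invented for illustration; bit 8 of ONCS is the Copy-command support bit per the NVMe specification):

supports_copy() {
    local -n ctrl=$1                 # nameref to an array such as nvme0
    (( ctrl[oncs] & (1 << 8) ))      # ONCS bit 8: Copy command supported
}

# With nvme0[oncs]=0x15d as parsed below, 0x15d & 0x100 is non-zero, which is
# consistent with the 'Controller supports SCC.' message from the simple-copy test.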
00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:46.454 23:15:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.454 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:46.455 23:15:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:46.455 23:15:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:46.455 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:46.456 
23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:46.456 23:15:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:46.456 23:15:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:46.456 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:46.457 23:15:10 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:46.457 23:15:10 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:46.457 23:15:10 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.457 23:15:10 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- 
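The block above is the nvme_get helper walking id-ctrl output one field at a time: with IFS=: each "name : value" line splits into reg/val, and an eval plants the pair into a global associative array named after the controller (nvme1[vid]=0x1b36, nvme1[sn]='12340 ', and so on). A minimal sketch of that loop, assuming the same array-naming convention shown in the trace (an illustration, not the verbatim nvme/functions.sh source):

  # nvme_get <array-name> <command...>, e.g.
  #   nvme_get nvme1 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
  nvme_get() {
      local ref=$1 reg val
      shift
      declare -gA "$ref" && eval "$ref=()"   # fresh global associative array
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}           # 'vid       ' -> 'vid'
          [[ -n $reg && -n $val ]] || continue
          eval "${ref}[\$reg]=\${val# }"     # nvme1[vid]=0x1b36, ...
      done < <("$@")
  }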
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.457 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 
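Two of the values just captured decode further. Per the NVMe spec, ver packs the version as major<<16 | minor<<8 | tertiary, and mdts caps the transfer size at 2^mdts memory pages. A quick check against the numbers in this trace (the 4 KiB page size is an assumption; MPSMIN lives in the CAP register, which this dump does not show):

  ver=0x10400 mdts=7
  printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))  # NVMe 1.4.0
  echo "max transfer: $(( (1 << mdts) * 4096 / 1024 )) KiB"   # 512 KiB, assuming 4 KiB pages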
23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 
23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:46.458 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:46.459 23:15:10 nvme_fdp -- 
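sqes=0x66 and cqes=0x44 above are packed nibbles (low nibble = required entry size, high nibble = maximum, both powers of two), and oncs is the optional-NVM-command-support bitmask. Checking a few of this controller's values:

  sqes=0x66 cqes=0x44 oncs=0x15d
  echo "SQ entries: $((1 << (sqes & 0xf)))-$((1 << (sqes >> 4))) bytes"   # 64-64
  echo "CQ entries: $((1 << (cqes & 0xf)))-$((1 << (cqes >> 4))) bytes"   # 16-16
  (( oncs & 1 << 3 )) && echo 'Write Zeroes supported'       # ONCS bit 3
  (( oncs & 1 << 5 )) || echo 'Reservations not supported'   # ONCS bit 5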
nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.459 23:15:10 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
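With the controller table filled, the helper pivots to namespaces: a nameref (local -n _ctrl_ns=nvme1_ns) aliases the per-controller array, each /sys/class/nvme/nvme1/nvme1n* entry is globbed, and id-ns output is parsed by the same nvme_get loop. A sketch of that enumeration under the same naming scheme (illustrative; the real loop lives in nvme/functions.sh):

  ctrl=/sys/class/nvme/nvme1
  declare -a nvme1_ns
  unset -n _ctrl_ns
  declare -n _ctrl_ns=nvme1_ns
  for ns in "$ctrl/${ctrl##*/}n"*; do   # /sys/class/nvme/nvme1/nvme1n1, ...
      [[ -e $ns ]] || continue          # skip if the glob matched nothing
      ns_dev=${ns##*/}                  # nvme1n1
      _ctrl_ns[${ns##*n}]=$ns_dev       # strip through the last 'n': index 1
      # nvme_get "$ns_dev" /usr/local/src/nvme-cli/nvme id-ns "/dev/$ns_dev"
  done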
0x17a17a ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.459 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:46.460 23:15:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
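The lbafN entries being recorded here each describe one LBA format: ms is the metadata size in bytes, lbads the data size as a power of two, and rp the relative performance; the low nibble of flbas picks the active format (the one tagged "(in use)", lbaf7 for this namespace, just below). Decoding those two values for illustration:

  declare -A nvme1n1=( [flbas]=0x7 [lbaf7]='ms:64 lbads:12 rp:0 (in use)' )  # from the trace
  idx=$(( ${nvme1n1[flbas]} & 0xf ))        # 0x7 -> format 7
  entry=${nvme1n1[lbaf$idx]}
  lbads=${entry#*lbads:}; lbads=${lbads%% *}
  ms=${entry#ms:};        ms=${ms%% *}
  echo "format $idx: $((1 << lbads))-byte blocks + $ms bytes metadata"   # 4096 + 64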
reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:46.460 23:15:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:46.460 23:15:10 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:46.460 23:15:10 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:46.460 23:15:10 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.461 23:15:10 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:46.461 23:15:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:46.461 23:15:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:46.461 23:15:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:46.461 
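Before nvme2 at 0000:00:12.0 is parsed, pci_can_use from scripts/common.sh gates the device; in this run both filter lists are empty, so the regex test against an empty left-hand side fails, [[ -z '' ]] succeeds, and the function returns 0. A plausible reconstruction of that gate consistent with the trace (the PCI_BLOCKED/PCI_ALLOWED variable names are assumed, and the =~ match is simplified):

  pci_can_use() {
      local i
      [[ $PCI_BLOCKED =~ $1 ]] && return 1   # block list wins outright
      [[ -z $PCI_ALLOWED ]] && return 0      # no allow list: everything passes
      for i in $PCI_ALLOWED; do
          [[ $i == "$1" ]] && return 0
      done
      return 1
  }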
23:15:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:46.461
23:15:10 nvme_fdp -- nvme2 (id-ctrl): vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=- 00:09:46.463
23:15:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:46.463
23:15:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:46.463
23:15:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:46.463
23:15:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:46.463
23:15:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:46.463
23:15:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:46.463
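The id-ctrl and id-ns dumps above are collected by nvme_get in nvme/functions.sh: it declares a global associative array for the device, splits each "reg : val" line of nvme-cli output on the colon (the IFS=: / read -r reg val steps in the trace), and evals the pair into the array. A minimal sketch of that pattern, assuming a simplified form of the helper (names mirror the trace; the upstream implementation may differ in detail):

    # Illustrative sketch only, not the verbatim nvme/functions.sh helper.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"               # e.g. declares the global array nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}      # 'vid      ' -> 'vid'
            val=${val# }                  # drop the single space after ':'
            [[ -n $reg && -n $val ]] && eval "${ref}[\$reg]=\$val"
        done < <("$@")                    # remaining args form the command to run
    }

    # usage in the spirit of the trace; afterwards ${nvme2[vid]} expands to 0x1b36:
    # nvme_get nvme2 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2

Because read is given two variables, everything after the first colon lands in val, which is why multi-field entries such as ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' survive intact.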
23:15:10 nvme_fdp -- nvme2n1 (id-ns): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0' 00:09:46.464
23:15:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme2n1 00:09:46.464
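nvme2n1 advertises eight LBA formats (nlbaf=7 is zero-based) and flbas=0x4 selects lbaf4, ms:0 lbads:12, i.e. 4096-byte logical blocks with no metadata, which matches the "(in use)" marker; the ${ns##*n} expansion in the @58 registration strips the sysfs path through its last "n", so /sys/class/nvme/nvme2/nvme2n1 registers at index 1 of _ctrl_ns. A quick check of the resulting geometry, using the values parsed above (illustrative only):

    # lbads is a power-of-two exponent; nsze is a logical-block count
    lbads=12 nsze=$((0x100000))
    echo $((1 << lbads))             # 4096       -> logical block size in bytes
    echo $((nsze * (1 << lbads)))    # 4294967296 -> 4 GiB of namespace capacity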
23:15:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:46.464
23:15:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:46.464
23:15:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:46.464
23:15:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:46.464
23:15:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:46.464
23:15:10 nvme_fdp -- nvme2n2 (id-ns): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0' 00:09:46.465
23:15:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[2]=nvme2n2 00:09:46.465
23:15:10 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:46.465 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
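What this long stretch of xtrace shows is nvme_get at work: it runs nvme-cli's id-ns (or id-ctrl) against the device, splits every output line on the first colon via IFS=: and read -r reg val, and records each pair in a global bash associative array (nvme2n3 here) so later helpers can look registers up by name. A minimal sketch of that parsing idiom, assuming "reg : value" output from nvme-cli; the array name ns_info is illustrative rather than the actual functions.sh source, and direct assignment stands in for the eval the script uses:

  declare -gA ns_info=()
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}              # trim the padding around the register name
      [[ -n $reg && -n $val ]] || continue  # skip separator lines and empty values
      ns_info[$reg]=${val# }                # e.g. ns_info[nsze]=0x100000
  done < <(nvme id-ns /dev/nvme2n3)
  echo "namespace size: ${ns_info[nsze]}"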
00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:46.466 
23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:46.466 23:15:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:46.466 23:15:10 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:46.466 23:15:10 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:46.466 23:15:10 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.466 23:15:10 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.466 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:46.730 23:15:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.730 23:15:10 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:46.730 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 
23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 
23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.731 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
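A decoding note for the lbafN strings captured in these dumps: ms is the metadata bytes per LBA, lbads is the log2 of the LBA data size, and rp is the relative performance hint; the low bits of flbas select the active format, which is why flbas=0x4 lines up with lbaf4, the entry printed with "(in use)". A one-liner to confirm the block size that implies:

  lbads=12
  echo "lbaf4 block size: $(( 1 << lbads )) bytes"   # ms:0 lbads:12 -> 4096-byte LBAs, no metadata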
00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.732 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.733 23:15:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:46.733 23:15:10 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
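The selection pass traced here and just below is ctrl_has_fdp: for each cached controller it fetches ctratt and tests bit 19, the Flexible Data Placement attribute from the Identify Controller data. nvme1, nvme0, and nvme2 all report ctratt=0x8000 (bit 19 clear), while nvme3 reports 0x88010 (bit 19 = 0x80000 set), so nvme3 is the controller echoed back to nvme_fdp.sh. The same test in isolation, with the value hard-coded from this trace:

  ctratt=0x88010                    # CTRATT captured for nvme3 above
  if (( ctratt & 1 << 19 )); then   # bit 19 flags FDP support
      echo "FDP-capable controller found"
  fi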
00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:46.733 23:15:10 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:46.733 23:15:10 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:46.733 23:15:10 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:46.733 23:15:10 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:47.304 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:47.876 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:47.876 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:47.876 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:47.876 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:47.876 23:15:11 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:47.876 23:15:11 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:47.876 23:15:11 
nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.876 23:15:11 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:47.876 ************************************ 00:09:47.876 START TEST nvme_flexible_data_placement 00:09:47.876 ************************************ 00:09:47.876 23:15:11 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:48.138 Initializing NVMe Controllers 00:09:48.138 Attaching to 0000:00:13.0 00:09:48.138 Controller supports FDP Attached to 0000:00:13.0 00:09:48.138 Namespace ID: 1 Endurance Group ID: 1 00:09:48.138 Initialization complete. 00:09:48.138 00:09:48.138 ================================== 00:09:48.138 == FDP tests for Namespace: #01 == 00:09:48.138 ================================== 00:09:48.138 00:09:48.138 Get Feature: FDP: 00:09:48.138 ================= 00:09:48.138 Enabled: Yes 00:09:48.138 FDP configuration Index: 0 00:09:48.138 00:09:48.138 FDP configurations log page 00:09:48.138 =========================== 00:09:48.138 Number of FDP configurations: 1 00:09:48.138 Version: 0 00:09:48.138 Size: 112 00:09:48.138 FDP Configuration Descriptor: 0 00:09:48.138 Descriptor Size: 96 00:09:48.138 Reclaim Group Identifier format: 2 00:09:48.138 FDP Volatile Write Cache: Not Present 00:09:48.138 FDP Configuration: Valid 00:09:48.138 Vendor Specific Size: 0 00:09:48.138 Number of Reclaim Groups: 2 00:09:48.138 Number of Reclaim Unit Handles: 8 00:09:48.138 Max Placement Identifiers: 128 00:09:48.138 Number of Namespaces Supported: 256 00:09:48.138 Reclaim Unit Nominal Size: 6000000 bytes 00:09:48.138 Estimated Reclaim Unit Time Limit: Not Reported 00:09:48.138 RUH Desc #000: RUH Type: Initially Isolated 00:09:48.138 RUH Desc #001: RUH Type: Initially Isolated 00:09:48.138 RUH Desc #002: RUH Type: Initially Isolated 00:09:48.138 RUH Desc #003: RUH Type: Initially Isolated 00:09:48.138 RUH Desc #004: RUH Type: Initially Isolated 00:09:48.138 RUH Desc #005: RUH Type: Initially Isolated 00:09:48.138 RUH Desc #006: RUH Type: Initially Isolated 00:09:48.138 RUH Desc #007: RUH Type: Initially Isolated 00:09:48.138 00:09:48.138 FDP reclaim unit handle usage log page 00:09:48.138 ====================================== 00:09:48.138 Number of Reclaim Unit Handles: 8 00:09:48.138 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:48.138 RUH Usage Desc #001: RUH Attributes: Unused 00:09:48.138 RUH Usage Desc #002: RUH Attributes: Unused 00:09:48.138 RUH Usage Desc #003: RUH Attributes: Unused 00:09:48.138 RUH Usage Desc #004: RUH Attributes: Unused 00:09:48.138 RUH Usage Desc #005: RUH Attributes: Unused 00:09:48.138 RUH Usage Desc #006: RUH Attributes: Unused 00:09:48.138 RUH Usage Desc #007: RUH Attributes: Unused 00:09:48.138 00:09:48.138 FDP statistics log page 00:09:48.138 ======================= 00:09:48.138 Host bytes with metadata written: 1571758080 00:09:48.138 Media bytes with metadata written: 1571975168 00:09:48.138 Media bytes erased: 0 00:09:48.138 00:09:48.138 FDP Reclaim unit handle status 00:09:48.138 ============================== 00:09:48.138 Number of RUHS descriptors: 2 00:09:48.138 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000250e 00:09:48.138 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:48.138 00:09:48.138 FDP write on placement id: 0 success 00:09:48.138 00:09:48.138 Set Feature: Enabling FDP events on Placement handle: 
#0 Success 00:09:48.138 00:09:48.138 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:48.138 00:09:48.138 Get Feature: FDP Events for Placement handle: #0 00:09:48.138 ======================== 00:09:48.138 Number of FDP Events: 6 00:09:48.138 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:48.138 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:48.138 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:48.138 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:48.138 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:48.138 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:48.138 00:09:48.138 FDP events log page 00:09:48.138 =================== 00:09:48.138 Number of FDP events: 1 00:09:48.138 FDP Event #0: 00:09:48.138 Event Type: RU Not Written to Capacity 00:09:48.138 Placement Identifier: Valid 00:09:48.138 NSID: Valid 00:09:48.138 Location: Valid 00:09:48.138 Placement Identifier: 0 00:09:48.138 Event Timestamp: 3 00:09:48.138 Namespace Identifier: 1 00:09:48.138 Reclaim Group Identifier: 0 00:09:48.138 Reclaim Unit Handle Identifier: 0 00:09:48.138 00:09:48.138 FDP test passed 00:09:48.138 00:09:48.138 real 0m0.217s 00:09:48.138 user 0m0.061s 00:09:48.138 sys 0m0.054s 00:09:48.138 23:15:11 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.138 ************************************ 00:09:48.138 END TEST nvme_flexible_data_placement 00:09:48.138 ************************************ 00:09:48.138 23:15:11 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:48.138 ************************************ 00:09:48.138 END TEST nvme_fdp 00:09:48.138 ************************************ 00:09:48.138 00:09:48.138 real 0m7.901s 00:09:48.138 user 0m1.068s 00:09:48.138 sys 0m1.507s 00:09:48.138 23:15:11 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.138 23:15:11 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:48.138 23:15:11 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:48.138 23:15:11 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:48.138 23:15:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:48.138 23:15:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.138 23:15:11 -- common/autotest_common.sh@10 -- # set +x 00:09:48.138 ************************************ 00:09:48.138 START TEST nvme_rpc 00:09:48.138 ************************************ 00:09:48.138 23:15:11 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:48.401 * Looking for test storage... 
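The controller selection traced earlier reduces to a single bit test: the CTRATT field from Identify Controller carries the Flexible Data Placement capability at bit 19, so nvme3 (ctratt=0x88010) is the only controller chosen while the 0x8000 controllers are passed over. A minimal standalone sketch of that check, reusing the ctratt values from the trace:

#!/usr/bin/env bash
# Mirror of ctrl_has_fdp()/get_ctrls_with_feature() from nvme/functions.sh:
# a controller supports FDP when bit 19 of its CTRATT value is set.
declare -A ctratt=([nvme0]=0x8000 [nvme1]=0x8000 [nvme2]=0x8000 [nvme3]=0x88010)
for ctrl in "${!ctratt[@]}"; do
    if (( ctratt[$ctrl] & 1 << 19 )); then
        echo "$ctrl supports FDP"    # prints: nvme3 supports FDP
    fi
done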
00:09:48.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:48.401 23:15:11 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:48.401 23:15:11 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:48.401 23:15:11 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.401 23:15:12 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:48.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.401 --rc genhtml_branch_coverage=1 00:09:48.401 --rc genhtml_function_coverage=1 00:09:48.401 --rc genhtml_legend=1 00:09:48.401 --rc geninfo_all_blocks=1 00:09:48.401 --rc geninfo_unexecuted_blocks=1 00:09:48.401 00:09:48.401 ' 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:48.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.401 --rc genhtml_branch_coverage=1 00:09:48.401 --rc genhtml_function_coverage=1 00:09:48.401 --rc genhtml_legend=1 00:09:48.401 --rc geninfo_all_blocks=1 00:09:48.401 --rc geninfo_unexecuted_blocks=1 00:09:48.401 00:09:48.401 ' 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:48.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.401 --rc genhtml_branch_coverage=1 00:09:48.401 --rc genhtml_function_coverage=1 00:09:48.401 --rc genhtml_legend=1 00:09:48.401 --rc geninfo_all_blocks=1 00:09:48.401 --rc geninfo_unexecuted_blocks=1 00:09:48.401 00:09:48.401 ' 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:48.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.401 --rc genhtml_branch_coverage=1 00:09:48.401 --rc genhtml_function_coverage=1 00:09:48.401 --rc genhtml_legend=1 00:09:48.401 --rc geninfo_all_blocks=1 00:09:48.401 --rc geninfo_unexecuted_blocks=1 00:09:48.401 00:09:48.401 ' 00:09:48.401 23:15:12 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.401 23:15:12 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:48.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.401 23:15:12 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:48.401 23:15:12 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77215 00:09:48.401 23:15:12 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:48.401 23:15:12 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:48.401 23:15:12 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77215 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 77215 ']' 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.401 23:15:12 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.402 23:15:12 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.402 23:15:12 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.402 [2024-11-17 23:15:12.196548] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:09:48.402 [2024-11-17 23:15:12.196942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77215 ] 00:09:48.663 [2024-11-17 23:15:12.346280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:48.663 [2024-11-17 23:15:12.378448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.663 [2024-11-17 23:15:12.378510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.606 23:15:13 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.606 23:15:13 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:49.606 23:15:13 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:49.606 Nvme0n1 00:09:49.606 23:15:13 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:49.606 23:15:13 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:49.867 request: 00:09:49.868 { 00:09:49.868 "bdev_name": "Nvme0n1", 00:09:49.868 "filename": "non_existing_file", 00:09:49.868 "method": "bdev_nvme_apply_firmware", 00:09:49.868 "req_id": 1 00:09:49.868 } 00:09:49.868 Got JSON-RPC error response 00:09:49.868 response: 00:09:49.868 { 00:09:49.868 "code": -32603, 00:09:49.868 "message": "open file failed." 00:09:49.868 } 00:09:49.868 23:15:13 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:49.868 23:15:13 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:49.868 23:15:13 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:50.170 23:15:13 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:50.170 23:15:13 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77215 00:09:50.170 23:15:13 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 77215 ']' 00:09:50.170 23:15:13 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 77215 00:09:50.170 23:15:13 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:50.170 23:15:13 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.170 23:15:13 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77215 00:09:50.170 killing process with pid 77215 00:09:50.170 23:15:13 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.170 23:15:13 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.170 23:15:13 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77215' 00:09:50.170 23:15:13 nvme_rpc -- common/autotest_common.sh@973 -- # kill 77215 00:09:50.170 23:15:13 nvme_rpc -- common/autotest_common.sh@978 -- # wait 77215 00:09:50.444 ************************************ 00:09:50.444 END TEST nvme_rpc 00:09:50.444 ************************************ 00:09:50.444 00:09:50.444 real 0m2.129s 00:09:50.444 user 0m4.132s 00:09:50.444 sys 0m0.565s 00:09:50.444 23:15:14 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.444 23:15:14 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.444 23:15:14 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:50.444 23:15:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:50.444 23:15:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.444 23:15:14 -- common/autotest_common.sh@10 -- # set +x 00:09:50.444 ************************************ 00:09:50.444 START TEST nvme_rpc_timeouts 00:09:50.444 ************************************ 00:09:50.444 23:15:14 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:50.444 * Looking for test storage... 00:09:50.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:50.444 23:15:14 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:50.444 23:15:14 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:50.444 23:15:14 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:50.444 23:15:14 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:50.444 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:50.445 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.445 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:50.445 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.445 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.445 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.445 23:15:14 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:50.445 23:15:14 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.445 23:15:14 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:50.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.445 --rc genhtml_branch_coverage=1 00:09:50.445 --rc genhtml_function_coverage=1 00:09:50.445 --rc genhtml_legend=1 00:09:50.445 --rc geninfo_all_blocks=1 00:09:50.445 --rc geninfo_unexecuted_blocks=1 00:09:50.445 00:09:50.445 ' 00:09:50.445 23:15:14 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:50.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.445 --rc genhtml_branch_coverage=1 00:09:50.445 --rc genhtml_function_coverage=1 00:09:50.445 --rc genhtml_legend=1 00:09:50.445 --rc geninfo_all_blocks=1 00:09:50.445 --rc geninfo_unexecuted_blocks=1 00:09:50.445 00:09:50.445 ' 00:09:50.445 23:15:14 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:50.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.445 --rc genhtml_branch_coverage=1 00:09:50.445 --rc genhtml_function_coverage=1 00:09:50.445 --rc genhtml_legend=1 00:09:50.445 --rc geninfo_all_blocks=1 00:09:50.445 --rc geninfo_unexecuted_blocks=1 00:09:50.445 00:09:50.445 ' 00:09:50.445 23:15:14 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:50.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.445 --rc genhtml_branch_coverage=1 00:09:50.445 --rc genhtml_function_coverage=1 00:09:50.445 --rc genhtml_legend=1 00:09:50.445 --rc geninfo_all_blocks=1 00:09:50.445 --rc geninfo_unexecuted_blocks=1 00:09:50.445 00:09:50.445 ' 00:09:50.445 23:15:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:50.445 23:15:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77269 00:09:50.445 23:15:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77269 00:09:50.445 23:15:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77301 00:09:50.445 23:15:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
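The `lt 1.15 2` trace just above is scripts/common.sh's cmp_versions splitting each version string into fields and comparing them left to right (here deciding that the installed lcov predates 2, which selects the branch/function coverage flags). A condensed sketch of the same comparison; version_lt is a hypothetical stand-in and assumes plain numeric fields without leading zeros:

# Component-wise "less than" for dotted versions, in the spirit of
# scripts/common.sh lt/cmp_versions (simplified to dots only).
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly smaller field: lt
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly larger field: not lt
    done
    return 1                                        # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"    # fires, as in the trace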
00:09:50.445 23:15:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77301 00:09:50.445 23:15:14 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 77301 ']' 00:09:50.445 23:15:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:50.445 23:15:14 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.445 23:15:14 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.445 23:15:14 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.445 23:15:14 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.445 23:15:14 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:50.706 [2024-11-17 23:15:14.284588] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:50.706 [2024-11-17 23:15:14.284810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77301 ] 00:09:50.706 [2024-11-17 23:15:14.430936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:50.706 [2024-11-17 23:15:14.450421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.706 [2024-11-17 23:15:14.450568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.647 23:15:15 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.647 23:15:15 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:51.647 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:51.647 Checking default timeout settings: 00:09:51.647 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:51.647 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:51.647 Making settings changes with rpc: 00:09:51.648 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:51.908 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:51.908 Check default vs. 
modified settings: 00:09:51.908 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:52.168 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:52.168 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:52.168 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77269 00:09:52.168 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:52.168 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:52.168 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:52.168 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77269 00:09:52.168 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:52.168 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:52.429 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:52.429 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:52.429 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:52.429 Setting action_on_timeout is changed as expected. 00:09:52.429 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:52.429 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77269 00:09:52.429 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:52.429 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:52.429 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:52.429 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77269 00:09:52.429 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:52.429 23:15:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:52.429 Setting timeout_us is changed as expected. 00:09:52.429 23:15:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:52.429 23:15:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:52.429 23:15:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:52.429 23:15:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:52.429 23:15:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:52.429 23:15:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77269 00:09:52.429 23:15:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:52.429 23:15:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:52.429 23:15:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77269 00:09:52.429 23:15:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:52.429 23:15:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:52.429 Setting timeout_admin_us is changed as expected. 00:09:52.429 23:15:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:52.429 23:15:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:52.429 23:15:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:52.429 23:15:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:52.429 23:15:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77269 /tmp/settings_modified_77269 00:09:52.429 23:15:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77301 00:09:52.429 23:15:16 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 77301 ']' 00:09:52.429 23:15:16 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 77301 00:09:52.429 23:15:16 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:52.429 23:15:16 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.429 23:15:16 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77301 00:09:52.429 killing process with pid 77301 00:09:52.429 23:15:16 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.429 23:15:16 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.429 23:15:16 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77301' 00:09:52.429 23:15:16 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 77301 00:09:52.429 23:15:16 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 77301 00:09:52.689 RPC TIMEOUT SETTING TEST PASSED. 00:09:52.689 23:15:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
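The PASSED verdict above falls out of a plain before/after comparison: save the target's configuration, apply new timeouts over RPC, save again, and pull each setting out of both snapshots. Condensed from the rpc.py calls and the grep/awk/sed pipeline visible in the trace (the file paths here are placeholders; the test suffixes them with the spdk_tgt pid, e.g. /tmp/settings_default_77269):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
before=/tmp/settings_default
after=/tmp/settings_modified

"$rpc" save_config > "$before"
"$rpc" bdev_nvme_set_options --timeout-us=12000000 \
       --timeout-admin-us=24000000 --action-on-timeout=abort
"$rpc" save_config > "$after"

for setting in action_on_timeout timeout_us timeout_admin_us; do
    old=$(grep "$setting" "$before" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    new=$(grep "$setting" "$after"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ "$old" != "$new" ]] && echo "Setting $setting is changed as expected."
done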
00:09:52.689 00:09:52.689 real 0m2.214s 00:09:52.689 user 0m4.502s 00:09:52.689 sys 0m0.437s 00:09:52.689 ************************************ 00:09:52.689 END TEST nvme_rpc_timeouts 00:09:52.689 ************************************ 00:09:52.689 23:15:16 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.689 23:15:16 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:52.689 23:15:16 -- spdk/autotest.sh@239 -- # uname -s 00:09:52.689 23:15:16 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:52.689 23:15:16 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:52.689 23:15:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:52.689 23:15:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.689 23:15:16 -- common/autotest_common.sh@10 -- # set +x 00:09:52.689 ************************************ 00:09:52.689 START TEST sw_hotplug 00:09:52.689 ************************************ 00:09:52.689 23:15:16 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:52.689 * Looking for test storage... 00:09:52.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:52.689 23:15:16 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:52.689 23:15:16 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:52.689 23:15:16 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:52.689 23:15:16 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.689 23:15:16 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:52.689 23:15:16 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.689 23:15:16 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:52.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.689 --rc genhtml_branch_coverage=1 00:09:52.689 --rc genhtml_function_coverage=1 00:09:52.689 --rc genhtml_legend=1 00:09:52.689 --rc geninfo_all_blocks=1 00:09:52.689 --rc geninfo_unexecuted_blocks=1 00:09:52.689 00:09:52.689 ' 00:09:52.689 23:15:16 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:52.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.689 --rc genhtml_branch_coverage=1 00:09:52.689 --rc genhtml_function_coverage=1 00:09:52.689 --rc genhtml_legend=1 00:09:52.689 --rc geninfo_all_blocks=1 00:09:52.689 --rc geninfo_unexecuted_blocks=1 00:09:52.689 00:09:52.689 ' 00:09:52.689 23:15:16 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:52.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.689 --rc genhtml_branch_coverage=1 00:09:52.689 --rc genhtml_function_coverage=1 00:09:52.689 --rc genhtml_legend=1 00:09:52.689 --rc geninfo_all_blocks=1 00:09:52.689 --rc geninfo_unexecuted_blocks=1 00:09:52.689 00:09:52.689 ' 00:09:52.689 23:15:16 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:52.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.689 --rc genhtml_branch_coverage=1 00:09:52.689 --rc genhtml_function_coverage=1 00:09:52.689 --rc genhtml_legend=1 00:09:52.689 --rc geninfo_all_blocks=1 00:09:52.689 --rc geninfo_unexecuted_blocks=1 00:09:52.689 00:09:52.689 ' 00:09:52.689 23:15:16 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:52.949 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:53.209 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:53.209 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:53.209 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:53.209 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:53.209 23:15:16 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:53.209 23:15:16 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:53.209 23:15:16 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:53.209 23:15:16 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:53.209 23:15:16 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:53.209 23:15:16 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:53.210 23:15:16 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:53.210 23:15:16 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:53.210 23:15:16 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:53.210 23:15:16 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:53.469 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:53.728 Waiting for block devices as requested 00:09:53.728 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:53.728 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:53.728 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:53.728 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.012 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:59.012 23:15:22 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:59.012 23:15:22 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:59.269 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:59.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:59.269 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:59.528 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:59.788 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:59.788 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:00.049 23:15:23 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:00.049 23:15:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:00.049 23:15:23 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:00.049 23:15:23 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:00.049 23:15:23 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78148 00:10:00.049 23:15:23 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:00.049 23:15:23 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:00.049 23:15:23 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:00.049 23:15:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:00.049 23:15:23 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:00.049 23:15:23 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:00.049 23:15:23 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:00.049 23:15:23 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:00.049 23:15:23 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:00.049 23:15:23 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:00.049 23:15:23 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:00.049 23:15:23 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:00.049 23:15:23 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:00.049 23:15:23 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:00.310 Initializing NVMe Controllers 00:10:00.310 Attaching to 0000:00:10.0 00:10:00.310 Attaching to 0000:00:11.0 00:10:00.310 Attached to 0000:00:11.0 00:10:00.310 Attached to 0000:00:10.0 00:10:00.310 Initialization complete. Starting I/O... 
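For reference, the two controllers being exercised here came out of nvme_in_userspace, traced earlier: it walks `lspci -mm -n -D` for PCI class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe), and skips controllers still owned by the kernel nvme driver. The scan itself boils down to one pipeline, reproduced from the trace:

# Device scan from scripts/common.sh (simplified): class code 0108 with
# prog-if 02 identifies NVMe controllers; print their BDFs.
lspci -mm -n -D | grep -i -- -p02 \
    | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'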
00:10:00.310 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:00.310 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:00.310 00:10:01.245 QEMU NVMe Ctrl (12341 ): 2486 I/Os completed (+2486) 00:10:01.245 QEMU NVMe Ctrl (12340 ): 2480 I/Os completed (+2480) 00:10:01.245 00:10:02.183 QEMU NVMe Ctrl (12341 ): 6053 I/Os completed (+3567) 00:10:02.183 QEMU NVMe Ctrl (12340 ): 6043 I/Os completed (+3563) 00:10:02.183 00:10:03.124 QEMU NVMe Ctrl (12341 ): 9608 I/Os completed (+3555) 00:10:03.124 QEMU NVMe Ctrl (12340 ): 9615 I/Os completed (+3572) 00:10:03.124 00:10:04.508 QEMU NVMe Ctrl (12341 ): 12553 I/Os completed (+2945) 00:10:04.508 QEMU NVMe Ctrl (12340 ): 12560 I/Os completed (+2945) 00:10:04.508 00:10:05.449 QEMU NVMe Ctrl (12341 ): 15321 I/Os completed (+2768) 00:10:05.449 QEMU NVMe Ctrl (12340 ): 15330 I/Os completed (+2770) 00:10:05.449 00:10:06.020 23:15:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:06.020 23:15:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:06.020 23:15:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:06.020 [2024-11-17 23:15:29.737010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:06.020 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:06.020 [2024-11-17 23:15:29.738671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.020 [2024-11-17 23:15:29.738866] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.020 [2024-11-17 23:15:29.738967] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.020 [2024-11-17 23:15:29.739007] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.020 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:06.020 [2024-11-17 23:15:29.741073] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.020 [2024-11-17 23:15:29.741139] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.020 [2024-11-17 23:15:29.741157] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.020 [2024-11-17 23:15:29.741176] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.020 23:15:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:06.020 23:15:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:06.020 [2024-11-17 23:15:29.765910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:06.020 Controller removed: QEMU NVMe Ctrl (12341 )
00:10:06.020 [2024-11-17 23:15:29.767013] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:06.020 [2024-11-17 23:15:29.767065] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:06.020 [2024-11-17 23:15:29.767085] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:06.020 [2024-11-17 23:15:29.767101] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:06.020 unregister_dev: QEMU NVMe Ctrl (12341 )
00:10:06.020 [2024-11-17 23:15:29.768398] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:06.020 [2024-11-17 23:15:29.768441] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:06.020 [2024-11-17 23:15:29.768479] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:06.020 [2024-11-17 23:15:29.768493] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:06.020 23:15:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:10:06.020 23:15:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:10:06.282 23:15:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:06.282 23:15:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:06.282 23:15:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:10:06.282
00:10:06.282 23:15:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:10:06.282 23:15:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:06.282 23:15:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:06.282 23:15:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:06.282 23:15:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:10:06.282 Attaching to 0000:00:10.0
00:10:06.282 Attached to 0000:00:10.0
00:10:06.282 23:15:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:10:06.282 23:15:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:06.282 23:15:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:10:06.282 Attaching to 0000:00:11.0
00:10:06.282 Attached to 0000:00:11.0
00:10:07.226 QEMU NVMe Ctrl (12340 ): 2644 I/Os completed (+2644)
00:10:07.226 QEMU NVMe Ctrl (12341 ): 2413 I/Os completed (+2413)
00:10:07.226
00:10:08.172 QEMU NVMe Ctrl (12340 ): 5560 I/Os completed (+2916)
00:10:08.173 QEMU NVMe Ctrl (12341 ): 5329 I/Os completed (+2916)
00:10:08.173
00:10:09.118 QEMU NVMe Ctrl (12340 ): 8470 I/Os completed (+2910)
00:10:09.118 QEMU NVMe Ctrl (12341 ): 8236 I/Os completed (+2907)
00:10:09.118
00:10:10.502 QEMU NVMe Ctrl (12340 ): 11900 I/Os completed (+3430)
00:10:10.502 QEMU NVMe Ctrl (12341 ): 11672 I/Os completed (+3436)
00:10:10.502
00:10:11.445 QEMU NVMe Ctrl (12340 ): 14764 I/Os completed (+2864)
00:10:11.445 QEMU NVMe Ctrl (12341 ): 14539 I/Os completed (+2867)
00:10:11.445
00:10:12.384 QEMU NVMe Ctrl (12340 ): 17850 I/Os completed (+3086)
00:10:12.384 QEMU NVMe Ctrl (12341 ): 17613 I/Os completed (+3074)
00:10:12.384
00:10:13.321 QEMU NVMe Ctrl (12340 ): 21380 I/Os completed (+3530)
00:10:13.321 QEMU NVMe Ctrl (12341 ): 21107 I/Os completed (+3494)
00:10:13.321
00:10:14.254 QEMU NVMe Ctrl (12340 ): 25128 I/Os completed (+3748)
00:10:14.254 QEMU NVMe Ctrl (12341 ): 24855 I/Os completed (+3748)
00:10:14.254
00:10:15.237 QEMU NVMe Ctrl (12340 ): 28828 I/Os completed (+3700)
00:10:15.237 QEMU NVMe Ctrl (12341 ): 28555 I/Os completed (+3700)
00:10:15.237
00:10:16.187 QEMU NVMe Ctrl (12340 ): 34228 I/Os completed (+5400)
00:10:16.187 QEMU NVMe Ctrl (12341 ): 35544 I/Os completed (+6989)
00:10:16.187
00:10:17.124 QEMU NVMe Ctrl (12340 ): 38704 I/Os completed (+4476)
00:10:17.124 QEMU NVMe Ctrl (12341 ): 40543 I/Os completed (+4999)
00:10:17.124
00:10:18.498 QEMU NVMe Ctrl (12340 ): 42997 I/Os completed (+4293)
00:10:18.498 QEMU NVMe Ctrl (12341 ): 44833 I/Os completed (+4290)
00:10:18.498
00:10:18.498 23:15:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:10:18.498 23:15:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:18.498 23:15:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:18.498 23:15:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:18.498 [2024-11-17 23:15:42.080764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:10:18.499 Controller removed: QEMU NVMe Ctrl (12340 )
00:10:18.499 [2024-11-17 23:15:42.081564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:18.499 [2024-11-17 23:15:42.081609] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:18.499 [2024-11-17 23:15:42.081622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:18.499 [2024-11-17 23:15:42.081641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:18.499 unregister_dev: QEMU NVMe Ctrl (12340 )
00:10:18.499 [2024-11-17 23:15:42.082914] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:18.499 [2024-11-17 23:15:42.083003] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:18.499 [2024-11-17 23:15:42.083031] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:18.499 [2024-11-17 23:15:42.083083] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:18.499 23:15:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:18.499 23:15:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:18.499 [2024-11-17 23:15:42.101591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:10:18.499 Controller removed: QEMU NVMe Ctrl (12341 )
00:10:18.499 [2024-11-17 23:15:42.102435] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:18.499 [2024-11-17 23:15:42.102463] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:18.499 [2024-11-17 23:15:42.102477] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:18.499 [2024-11-17 23:15:42.102491] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:18.499 unregister_dev: QEMU NVMe Ctrl (12341 )
00:10:18.499 [2024-11-17 23:15:42.103354] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:18.499 [2024-11-17 23:15:42.103381] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:18.499 [2024-11-17 23:15:42.103395] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:18.499 [2024-11-17 23:15:42.103406] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:18.499 23:15:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:10:18.499 23:15:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:10:18.499 23:15:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:18.499 23:15:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:18.499 23:15:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:10:18.499 23:15:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:10:18.499 23:15:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:18.499 23:15:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:18.499 23:15:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:18.499 23:15:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:10:18.499 Attaching to 0000:00:10.0
00:10:18.499 Attached to 0000:00:10.0
00:10:18.499 23:15:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:10:18.757 23:15:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:18.757 23:15:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:10:18.757 Attaching to 0000:00:11.0
00:10:18.757 Attached to 0000:00:11.0
00:10:19.322 QEMU NVMe Ctrl (12340 ): 2704 I/Os completed (+2704)
00:10:19.322 QEMU NVMe Ctrl (12341 ): 2401 I/Os completed (+2401)
00:10:19.322
00:10:20.276 QEMU NVMe Ctrl (12340 ): 7003 I/Os completed (+4299)
00:10:20.276 QEMU NVMe Ctrl (12341 ): 6879 I/Os completed (+4478)
00:10:20.276
00:10:21.211 QEMU NVMe Ctrl (12340 ): 11879 I/Os completed (+4876)
00:10:21.211 QEMU NVMe Ctrl (12341 ): 11955 I/Os completed (+5076)
00:10:21.211
00:10:22.146 QEMU NVMe Ctrl (12340 ): 16082 I/Os completed (+4203)
00:10:22.146 QEMU NVMe Ctrl (12341 ): 16419 I/Os completed (+4464)
00:10:22.146
00:10:23.525 QEMU NVMe Ctrl (12340 ): 20464 I/Os completed (+4382)
00:10:23.525 QEMU NVMe Ctrl (12341 ): 20896 I/Os completed (+4477)
00:10:23.525
00:10:24.465 QEMU NVMe Ctrl (12340 ): 24650 I/Os completed (+4186)
00:10:24.465 QEMU NVMe Ctrl (12341 ): 25228 I/Os completed (+4332)
00:10:24.465
00:10:25.401 QEMU NVMe Ctrl (12340 ): 29020 I/Os completed (+4370)
00:10:25.401 QEMU NVMe Ctrl (12341 ): 29973 I/Os completed (+4745)
00:10:25.401
00:10:26.349 QEMU NVMe Ctrl (12340 ): 33317 I/Os completed (+4297)
00:10:26.349 QEMU NVMe Ctrl (12341 ): 34196 I/Os completed (+4223)
00:10:26.349
00:10:27.285 QEMU NVMe Ctrl (12340 ): 37858 I/Os completed (+4541)
00:10:27.285 QEMU NVMe Ctrl (12341 ): 38510 I/Os completed (+4314)
00:10:27.285
00:10:28.219 QEMU NVMe Ctrl (12340 ): 42263 I/Os completed (+4405)
00:10:28.219 QEMU NVMe Ctrl (12341 ): 42739 I/Os completed (+4229)
00:10:28.219
00:10:29.162 QEMU NVMe Ctrl (12340 ): 46817 I/Os completed (+4554)
00:10:29.162 QEMU NVMe Ctrl (12341 ): 47063 I/Os completed (+4324)
00:10:29.162
00:10:30.539 QEMU NVMe Ctrl (12340 ): 50915 I/Os completed (+4098)
00:10:30.539 QEMU NVMe Ctrl (12341 ): 51234 I/Os completed (+4171)
00:10:30.539
00:10:30.539 23:15:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:10:30.539 23:15:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:30.539 23:15:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:30.539 23:15:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:30.539 [2024-11-17 23:15:54.335484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:10:30.539 Controller removed: QEMU NVMe Ctrl (12340 )
00:10:30.539 [2024-11-17 23:15:54.338207] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:30.539 [2024-11-17 23:15:54.338278] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:30.539 [2024-11-17 23:15:54.338305] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:30.539 [2024-11-17 23:15:54.338333] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:30.539 unregister_dev: QEMU NVMe Ctrl (12340 )
00:10:30.539 [2024-11-17 23:15:54.339564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:30.539 [2024-11-17 23:15:54.339617] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:30.539 [2024-11-17 23:15:54.339641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:30.539 [2024-11-17 23:15:54.339670] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:30.539 23:15:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:30.539 23:15:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:30.539 [2024-11-17 23:15:54.357187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
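The `echo 1` / `echo uio_pci_generic` / `echo <bdf>` xtrace lines above show only the arguments, not where they are written; the helper is driving PCI hotplug through sysfs. A minimal sketch of the usual sequence for a software surprise-remove and re-add follows -- the exact redirection targets are an assumption here (they are not visible in the xtrace), but all of these sysfs attributes are standard kernel interfaces:

    bdf=0000:00:10.0
    # surprise-remove: detach the device and drop its config-space entry,
    # which is what triggers the nvme_ctrlr_fail/abort messages above
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"
    # bring the device back by rescanning the bus
    echo 1 > /sys/bus/pci/rescan
    # pin the userspace driver so the kernel nvme driver does not claim it,
    # bind it, then clear the override again
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers/uio_pci_generic/bind
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"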
00:10:30.539 Controller removed: QEMU NVMe Ctrl (12341 ) [2024-11-17 23:15:54.357984] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:30.539 [2024-11-17 23:15:54.358037] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:30.539 [2024-11-17 23:15:54.358066] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:30.539 [2024-11-17 23:15:54.358093] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:30.802 unregister_dev: QEMU NVMe Ctrl (12341 ) [2024-11-17 23:15:54.359019] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:30.802 [2024-11-17 23:15:54.359069] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:30.802 [2024-11-17 23:15:54.359093] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:30.802 [2024-11-17 23:15:54.359113] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:30.802 23:15:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:10:30.802 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor
00:10:30.802 23:15:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:10:30.802 EAL: Scan for (pci) bus failed.
00:10:30.802 23:15:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:30.802 23:15:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:30.802 23:15:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:10:30.802 23:15:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:10:30.802 23:15:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:30.802 23:15:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:30.802 23:15:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:30.802 23:15:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:10:30.802 Attaching to 0000:00:10.0
00:10:30.802 Attached to 0000:00:10.0
00:10:30.802 23:15:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:10:30.802 23:15:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:30.802 23:15:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:10:30.802 Attaching to 0000:00:11.0
00:10:30.802 Attached to 0000:00:11.0
00:10:30.802 unregister_dev: QEMU NVMe Ctrl (12340 )
00:10:30.802 unregister_dev: QEMU NVMe Ctrl (12341 )
00:10:30.802 [2024-11-17 23:15:54.591616] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09
00:10:43.110 23:16:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:10:43.110 23:16:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:43.110 23:16:06 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.86
00:10:43.110 23:16:06 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.86
00:10:43.110 23:16:06 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:10:43.110 23:16:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.86
00:10:43.110 23:16:06 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.86 2
00:10:43.110 remove_attach_helper took 42.86s to complete (handling 2 nvme drive(s)) 23:16:06 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6
00:10:49.698 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78148
00:10:49.698 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78148) - No such process
00:10:49.698 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78148
00:10:49.698 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT
00:10:49.698 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug
00:10:49.698 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev
00:10:49.698 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=78699
00:10:49.698 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
00:10:49.698 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 78699
00:10:49.698 23:16:12 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:10:49.698 23:16:12 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 78699 ']'
00:10:49.698 23:16:12 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:49.698 23:16:12 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:49.698 23:16:12 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:49.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:49.698 23:16:12 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:49.698 23:16:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:49.698 [2024-11-17 23:16:12.668702] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
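`waitforlisten 78699` above blocks until the freshly started spdk_tgt answers on its RPC socket. A rough sketch of what such a helper has to do -- the function name, retry count, and probe RPC here are illustrative, not the autotest implementation:

    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        for ((i = 0; i < 100; i++)); do
            # give up if the target died before it ever started listening
            kill -0 "$pid" 2> /dev/null || return 1
            # rpc_get_methods succeeds once the app is up and serving RPCs
            [[ -S $sock ]] && scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }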
00:10:49.698 [2024-11-17 23:16:12.669105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78699 ]
00:10:49.698 [2024-11-17 23:16:12.814962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:49.698 [2024-11-17 23:16:12.839259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:49.699 23:16:13 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:49.699 23:16:13 sw_hotplug -- common/autotest_common.sh@868 -- # return 0
00:10:49.699 23:16:13 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:10:49.699 23:16:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.699 23:16:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:49.699 23:16:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.699 23:16:13 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true
00:10:49.699 23:16:13 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:10:49.699 23:16:13 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:10:49.699 23:16:13 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:10:49.699 23:16:13 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:10:49.699 23:16:13 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:10:49.699 23:16:13 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:10:49.699 23:16:13 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:10:49.699 23:16:13 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:10:49.699 23:16:13 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:10:49.699 23:16:13 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:10:49.699 23:16:13 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:10:49.699 23:16:13 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:10:56.261 23:16:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:56.261 23:16:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:56.261 23:16:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:56.261 23:16:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:56.261 23:16:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:56.261 23:16:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:10:56.261 23:16:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:56.261 23:16:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:56.261 23:16:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:56.261 23:16:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:56.261 23:16:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.261 23:16:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:56.261 23:16:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:56.261 23:16:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:56.262 23:16:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:10:56.262 23:16:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:10:56.262 [2024-11-17 23:16:19.557832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
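This `tgt_run_hotplug` phase exercises hotplug from inside a long-lived SPDK target rather than via a per-test app: `bdev_nvme_set_hotplug -e` (sw_hotplug.sh@115 above) asks the bdev layer itself to watch for devices coming and going. Illustrative RPC usage -- only the -e and -d forms appear in this log; anything beyond them would be an assumption:

    scripts/rpc.py bdev_nvme_set_hotplug -e   # enable the hotplug monitor
    scripts/rpc.py bdev_get_bdevs             # NVMe bdevs now disappear/reappear as devices detach/attach
    scripts/rpc.py bdev_nvme_set_hotplug -d   # disable it again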
00:10:56.262 [2024-11-17 23:16:19.558992] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:56.262 [2024-11-17 23:16:19.559144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:10:56.262 [2024-11-17 23:16:19.559165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:56.262 [2024-11-17 23:16:19.559184] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:56.262 [2024-11-17 23:16:19.559193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:10:56.262 [2024-11-17 23:16:19.559201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:56.262 [2024-11-17 23:16:19.559212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:56.262 [2024-11-17 23:16:19.559218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:10:56.262 [2024-11-17 23:16:19.559227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:56.262 [2024-11-17 23:16:19.559234] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:56.262 [2024-11-17 23:16:19.559242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:10:56.262 [2024-11-17 23:16:19.559249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:56.262 23:16:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:10:56.262 23:16:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:56.262 23:16:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:56.262 23:16:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:56.262 23:16:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:56.262 23:16:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:56.262 23:16:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.262 23:16:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:56.262 [2024-11-17 23:16:20.057819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:10:56.262 [2024-11-17 23:16:20.058933] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:56.262 [2024-11-17 23:16:20.058963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:10:56.262 [2024-11-17 23:16:20.058974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:56.262 [2024-11-17 23:16:20.058986] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:56.262 [2024-11-17 23:16:20.058993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:10:56.262 [2024-11-17 23:16:20.059002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:56.262 [2024-11-17 23:16:20.059008] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:56.262 [2024-11-17 23:16:20.059016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:10:56.262 [2024-11-17 23:16:20.059023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:56.262 [2024-11-17 23:16:20.059034] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:56.262 [2024-11-17 23:16:20.059040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:10:56.262 [2024-11-17 23:16:20.059048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:56.262 23:16:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:56.520 23:16:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:10:56.520 23:16:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:10:56.520 23:16:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:56.520 23:16:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:56.520 23:16:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:10:56.520 23:16:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:10:56.521 23:16:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:56.521 23:16:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:56.521 23:16:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:56.521 23:16:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:10:56.521 23:16:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:10:56.521 23:16:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:56.521 23:16:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:11:08.784 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:11:08.784 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:11:08.784 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:11:08.784 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:08.784 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:08.784 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:08.784 23:16:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.784 23:16:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:08.784 23:16:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.784 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:11:08.784 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:11:08.785 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:08.785 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:08.785 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:08.785 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:08.785 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:11:08.785 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:08.785 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:08.785 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:08.785 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:08.785 23:16:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.785 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:08.785 23:16:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:08.785 23:16:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.785 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:11:08.785 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:11:08.785 [2024-11-17 23:16:32.458012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
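The `bdev_bdfs` helper driven at @50/@12/@13 above can be reconstructed almost verbatim from the xtrace: it asks the target for its bdevs and reduces them to a sorted list of PCI addresses, which the removal loop polls until empty; after re-add, the @71 check compares the list against the expected pair (the `\0\0\0\0\:...` noise is just xtrace escaping a literal string match). A sketch, with the loop structure inferred from the @50/@51 lines:

    bdev_bdfs() {
        # list all bdevs and keep only the NVMe PCI addresses, deduplicated
        rpc_cmd bdev_get_bdevs |
            jq -r '.[].driver_specific.nvme[].pci_address' |
            sort -u
    }

    # removal-wait loop (@50/@51): poll until no NVMe bdev is left
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done
    # after re-attach, @71 verifies both controllers came back:
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]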
00:11:08.785 [2024-11-17 23:16:32.459121] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:08.785 [2024-11-17 23:16:32.459154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:08.785 [2024-11-17 23:16:32.459167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:08.785 [2024-11-17 23:16:32.459182] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:08.785 [2024-11-17 23:16:32.459191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:08.785 [2024-11-17 23:16:32.459198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:08.785 [2024-11-17 23:16:32.459206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:08.785 [2024-11-17 23:16:32.459213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:08.785 [2024-11-17 23:16:32.459221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:08.785 [2024-11-17 23:16:32.459227] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:08.785 [2024-11-17 23:16:32.459237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:08.785 [2024-11-17 23:16:32.459243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:09.351 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:11:09.351 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:09.351 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:09.351 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:09.351 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:09.352 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:09.352 23:16:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:09.352 23:16:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:09.352 23:16:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:09.352 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:11:09.352 23:16:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:11:09.352 [2024-11-17 23:16:33.058016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:11:09.352 [2024-11-17 23:16:33.059113] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:09.352 [2024-11-17 23:16:33.059145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:09.352 [2024-11-17 23:16:33.059156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:09.352 [2024-11-17 23:16:33.059169] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:09.352 [2024-11-17 23:16:33.059177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:09.352 [2024-11-17 23:16:33.059186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:09.352 [2024-11-17 23:16:33.059193] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:09.352 [2024-11-17 23:16:33.059218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:09.352 [2024-11-17 23:16:33.059225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:09.352 [2024-11-17 23:16:33.059235] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:09.352 [2024-11-17 23:16:33.059241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:09.352 [2024-11-17 23:16:33.059250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:09.918 23:16:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:11:09.918 23:16:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:09.918 23:16:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:09.918 23:16:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:09.918 23:16:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:09.918 23:16:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:09.918 23:16:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:09.918 23:16:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:09.918 23:16:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:09.918 23:16:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:11:09.918 23:16:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:11:09.918 23:16:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:09.918 23:16:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:09.918 23:16:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:11:09.918 23:16:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:11:09.918 23:16:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:09.918 23:16:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:09.918 23:16:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:09.918 23:16:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:11:10.176 23:16:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:11:10.176 23:16:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:10.176 23:16:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:11:22.374 23:16:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:11:22.374 23:16:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:11:22.374 23:16:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:11:22.374 23:16:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:22.374 23:16:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:22.374 23:16:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:22.374 23:16:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.374 23:16:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:22.374 23:16:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.374 23:16:45 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:11:22.374 23:16:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:11:22.374 23:16:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:22.374 23:16:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:22.374 23:16:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:22.374 23:16:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:22.374 23:16:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:11:22.374 23:16:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:22.374 23:16:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:22.374 23:16:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:22.374 23:16:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:22.374 23:16:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.374 23:16:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:22.374 23:16:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:22.374 [2024-11-17 23:16:45.858236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:11:22.374 [2024-11-17 23:16:45.859411] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:22.374 [2024-11-17 23:16:45.859442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:22.374 [2024-11-17 23:16:45.859458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:22.374 [2024-11-17 23:16:45.859473] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:22.375 [2024-11-17 23:16:45.859482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:22.375 [2024-11-17 23:16:45.859489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:22.375 [2024-11-17 23:16:45.859499] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:22.375 [2024-11-17 23:16:45.859506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:22.375 [2024-11-17 23:16:45.859514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:22.375 [2024-11-17 23:16:45.859521] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:22.375 [2024-11-17 23:16:45.859529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:22.375 [2024-11-17 23:16:45.859536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:22.633 23:16:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.633 23:16:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:11:22.633 23:16:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:11:22.633 [2024-11-17 23:16:46.258232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:11:22.633 [2024-11-17 23:16:46.259267] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:22.633 [2024-11-17 23:16:46.259296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:22.633 [2024-11-17 23:16:46.259306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:22.633 [2024-11-17 23:16:46.259318] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:22.633 [2024-11-17 23:16:46.259325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:22.633 [2024-11-17 23:16:46.259335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:22.633 [2024-11-17 23:16:46.259343] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:22.633 [2024-11-17 23:16:46.259352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:22.633 [2024-11-17 23:16:46.259358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:22.634 [2024-11-17 23:16:46.259367] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:22.634 [2024-11-17 23:16:46.259375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:22.634 [2024-11-17 23:16:46.259384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:22.634 23:16:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:11:22.634 23:16:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:22.634 23:16:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:22.634 23:16:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:22.634 23:16:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:22.634 23:16:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:22.634 23:16:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.634 23:16:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:22.634 23:16:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.634 23:16:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:11:22.634 23:16:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:11:22.891 23:16:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:22.891 23:16:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:22.891 23:16:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:11:22.891 23:16:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:11:22.891 23:16:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:22.891 23:16:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:22.891 23:16:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:22.891 23:16:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:11:22.891 23:16:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:11:22.891 23:16:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:22.891 23:16:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:11:35.098 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:11:35.098 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:11:35.098 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:11:35.098 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:35.098 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:35.098 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:35.098 23:16:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:35.098 23:16:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:35.098 23:16:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:35.098 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:11:35.098 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:11:35.098 23:16:58 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.21
00:11:35.098 23:16:58 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.21
00:11:35.098 23:16:58 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:11:35.098 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.21
00:11:35.098 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.21 2
00:11:35.098 remove_attach_helper took 45.21s to complete (handling 2 nvme drive(s)) 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d
00:11:35.098 23:16:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:35.098 23:16:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:35.098 23:16:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:35.098 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:11:35.098 23:16:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:35.098 23:16:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:35.098 23:16:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:35.098 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true
00:11:35.098 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:11:35.098 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:11:35.098 23:16:58 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:11:35.098 23:16:58 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:11:35.098 23:16:58 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:11:35.098 23:16:58 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:11:35.098 23:16:58 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:11:35.098 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:11:35.098 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:11:35.098 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:11:35.098 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:11:35.098 23:16:58 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:11:41.721 23:17:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:11:41.721 23:17:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:41.721 23:17:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:41.721 23:17:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:41.721 23:17:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:41.721 23:17:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:11:41.721 23:17:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:41.721 23:17:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:41.721 23:17:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:41.721 23:17:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:41.721 23:17:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:41.721 23:17:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.721 23:17:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:41.721 23:17:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.721 23:17:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:11:41.721 23:17:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:11:41.721 [2024-11-17 23:17:04.795620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:11:41.721 [2024-11-17 23:17:04.796471] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:41.721 [2024-11-17 23:17:04.796493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:41.721 [2024-11-17 23:17:04.796506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:41.721 [2024-11-17 23:17:04.796521] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:41.721 [2024-11-17 23:17:04.796531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:41.721 [2024-11-17 23:17:04.796538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:41.721 [2024-11-17 23:17:04.796547] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:41.721 [2024-11-17 23:17:04.796554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:41.721 [2024-11-17 23:17:04.796567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:41.721 [2024-11-17 23:17:04.796573] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:41.721 [2024-11-17 23:17:04.796581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:41.721 [2024-11-17 23:17:04.796588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:41.721 [2024-11-17 23:17:05.195615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
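The `remove_attach_helper took 45.21s ...` summary above is produced by the TIMEFORMAT machinery visible at @713/@719/@21/@22: bash's `time` keyword is pointed at the helper and its `%2R` output (wall-clock seconds, two decimals) is captured and reprinted. A standalone sketch of the same idea -- simplified, and the command's own output is discarded here, which the real harness does not do:

    timing_sketch() {
        local TIMEFORMAT=%2R elapsed
        # `time` honors TIMEFORMAT and reports on stderr; capture that report
        elapsed=$({ time "$@" > /dev/null 2>&1; } 2>&1)
        printf '%s took %ss to complete\n' "$1" "$elapsed"
    }
    # usage (assuming remove_attach_helper is defined as in sw_hotplug.sh):
    # timing_sketch remove_attach_helper 3 6 true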
00:11:41.721 [2024-11-17 23:17:05.196543] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:41.721 [2024-11-17 23:17:05.196576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:41.721 [2024-11-17 23:17:05.196588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:41.721 [2024-11-17 23:17:05.196600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:41.721 [2024-11-17 23:17:05.196608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:41.721 [2024-11-17 23:17:05.196617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:41.721 [2024-11-17 23:17:05.196625] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:41.721 [2024-11-17 23:17:05.196634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:41.721 [2024-11-17 23:17:05.196642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:41.721 [2024-11-17 23:17:05.196651] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:41.721 [2024-11-17 23:17:05.196658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:41.721 [2024-11-17 23:17:05.196670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:41.721 23:17:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:11:41.721 23:17:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:41.721 23:17:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:41.721 23:17:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:41.721 23:17:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:41.721 23:17:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.721 23:17:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:41.721 23:17:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:41.721 23:17:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.721 23:17:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:11:41.721 23:17:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:11:41.721 23:17:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:41.721 23:17:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:41.721 23:17:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:11:41.721 23:17:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:11:41.721 23:17:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:41.721 23:17:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:41.721 23:17:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:41.721 23:17:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:11:41.980 23:17:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:11:41.980 23:17:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:41.980 23:17:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:11:54.176 23:17:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:11:54.177 23:17:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:11:54.177 23:17:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:11:54.177 23:17:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:54.177 23:17:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:54.177 23:17:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:54.177 23:17:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:54.177 23:17:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:54.177 23:17:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:54.177 23:17:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:11:54.177 23:17:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:11:54.177 23:17:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:54.177 23:17:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:54.177 23:17:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:54.177 23:17:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:54.177 23:17:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:11:54.177 23:17:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:54.177 23:17:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:54.177 23:17:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:54.177 23:17:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:54.177 23:17:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:54.177 23:17:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:54.177 23:17:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:54.177 23:17:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:54.177 23:17:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:11:54.177 23:17:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:11:54.177 [2024-11-17 23:17:17.695827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:11:54.177 [2024-11-17 23:17:17.696794] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:54.177 [2024-11-17 23:17:17.696899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:54.177 [2024-11-17 23:17:17.696964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:54.177 [2024-11-17 23:17:17.696998] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:54.177 [2024-11-17 23:17:17.697016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:54.177 [2024-11-17 23:17:17.697041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:54.177 [2024-11-17 23:17:17.697066] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:54.177 [2024-11-17 23:17:17.697126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:54.177 [2024-11-17 23:17:17.697156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:54.177 [2024-11-17 23:17:17.697179] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:54.177 [2024-11-17 23:17:17.697196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:54.177 [2024-11-17 23:17:17.697219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:54.435 [2024-11-17 23:17:18.095834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:11:54.435 [2024-11-17 23:17:18.096768] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:54.435 [2024-11-17 23:17:18.096873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:54.435 [2024-11-17 23:17:18.096945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:54.435 [2024-11-17 23:17:18.096981] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:54.435 [2024-11-17 23:17:18.097001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:54.435 [2024-11-17 23:17:18.097028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:54.435 [2024-11-17 23:17:18.097052] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:54.435 [2024-11-17 23:17:18.097107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:54.435 [2024-11-17 23:17:18.097133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:54.435 [2024-11-17 23:17:18.097159] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:54.435 [2024-11-17 23:17:18.097175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:54.435 [2024-11-17 23:17:18.097199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:54.435 23:17:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:11:54.435 23:17:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:54.435 23:17:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:54.435 23:17:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:54.435 23:17:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:54.435 23:17:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:54.435 23:17:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:54.435 23:17:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:54.435 23:17:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:54.435 23:17:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:11:54.435 23:17:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:11:54.694 23:17:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:54.694 23:17:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:54.694 23:17:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:11:54.694 23:17:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:11:54.694 23:17:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:54.694 23:17:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:54.694 23:17:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:54.694 23:17:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:11:54.694 23:17:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:11:54.694 23:17:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:54.694 23:17:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:12:06.895 23:17:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:12:06.895 23:17:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:12:06.895 23:17:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:12:06.895 23:17:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:12:06.895 23:17:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:12:06.895 23:17:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:12:06.895 23:17:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.895 23:17:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:12:06.895 23:17:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.895 23:17:30 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:12:06.895 23:17:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:12:06.895 23:17:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:12:06.895 23:17:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:12:06.896 23:17:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:12:06.896 23:17:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:12:06.896 23:17:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:12:06.896 23:17:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:12:06.896 23:17:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:12:06.896 23:17:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:12:06.896 23:17:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:12:06.896 23:17:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:12:06.896 23:17:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.896 23:17:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:12:06.896 23:17:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.896 23:17:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:12:06.896 23:17:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:12:06.896 [2024-11-17 23:17:30.596060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:12:06.896 [2024-11-17 23:17:30.596869] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:06.896 [2024-11-17 23:17:30.596912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:12:06.896 [2024-11-17 23:17:30.596927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:06.896 [2024-11-17 23:17:30.596940] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:06.896 [2024-11-17 23:17:30.596952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:12:06.896 [2024-11-17 23:17:30.596960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:06.896 [2024-11-17 23:17:30.596970] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:06.896 [2024-11-17 23:17:30.596977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:12:06.896 [2024-11-17 23:17:30.596986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:06.896 [2024-11-17 23:17:30.596992] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:06.896 [2024-11-17 23:17:30.597000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:12:06.896 [2024-11-17 23:17:30.597007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:07.461 [2024-11-17 23:17:30.996044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:12:07.461 [2024-11-17 23:17:30.996797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:07.461 [2024-11-17 23:17:30.996831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:12:07.461 [2024-11-17 23:17:30.996841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:07.461 [2024-11-17 23:17:30.996852] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:07.461 [2024-11-17 23:17:30.996859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:12:07.461 [2024-11-17 23:17:30.996868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:07.461 [2024-11-17 23:17:30.996875] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:07.461 [2024-11-17 23:17:30.996898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:12:07.461 [2024-11-17 23:17:30.996906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:07.461 [2024-11-17 23:17:30.996914] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:07.461 [2024-11-17 23:17:30.996921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:12:07.461 [2024-11-17 23:17:30.996930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:07.461 23:17:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:12:07.461 23:17:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:12:07.461 23:17:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:12:07.461 23:17:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:12:07.461 23:17:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:12:07.461 23:17:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:12:07.461 23:17:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:07.461 23:17:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:12:07.462 23:17:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:07.462 23:17:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:12:07.462 23:17:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:12:07.462 23:17:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:12:07.462 23:17:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:12:07.462 23:17:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:12:07.721 23:17:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:12:07.721 23:17:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:12:07.721 23:17:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:12:07.721 23:17:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:12:07.721 23:17:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo
0000:00:11.0 00:12:07.721 23:17:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:07.721 23:17:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:07.721 23:17:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:19.938 23:17:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:19.938 23:17:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:19.938 23:17:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:19.938 23:17:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:19.939 23:17:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:19.939 23:17:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:19.939 23:17:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.939 23:17:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:19.939 23:17:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.939 23:17:43 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:19.939 23:17:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:19.939 23:17:43 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.76 00:12:19.939 23:17:43 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.76 00:12:19.939 23:17:43 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:19.939 23:17:43 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.76 00:12:19.939 23:17:43 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.76 2 00:12:19.939 remove_attach_helper took 44.76s to complete (handling 2 nvme drive(s)) 23:17:43 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:19.939 23:17:43 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 78699 00:12:19.939 23:17:43 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 78699 ']' 00:12:19.939 23:17:43 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 78699 00:12:19.939 23:17:43 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:19.939 23:17:43 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.939 23:17:43 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78699 00:12:19.939 killing process with pid 78699 00:12:19.939 23:17:43 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:19.939 23:17:43 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:19.939 23:17:43 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78699' 00:12:19.939 23:17:43 sw_hotplug -- common/autotest_common.sh@973 -- # kill 78699 00:12:19.939 23:17:43 sw_hotplug -- common/autotest_common.sh@978 -- # wait 78699 00:12:20.198 23:17:43 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:20.459 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:20.720 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:20.720 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:20.720 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:20.981 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:20.981 ************************************ 00:12:20.981 END TEST sw_hotplug 00:12:20.981 ************************************ 
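The loop above is the detach/attach cycle from test/nvme/sw_hotplug.sh: each pass drives the PCI sysfs remove and driver_override/bind hooks (the bare echo lines in the trace), polls bdev_bdfs until no NVMe-backed bdevs remain, rebinds 0000:00:10.0 and 0000:00:11.0 to uio_pci_generic, sleeps 12 seconds, and checks that both BDFs are visible again. The bdev_bdfs helper seen in the xtrace is a one-liner over the JSON-RPC interface; a minimal standalone sketch, assuming scripts/rpc.py is on PATH and the target listens on the default /var/tmp/spdk.sock:

    # Sketch of sw_hotplug.sh's bdev_bdfs: list PCI addresses of NVMe-backed bdevs.
    bdev_bdfs() {
        rpc.py bdev_get_bdevs |
            jq -r '.[].driver_specific.nvme[].pci_address' |
            sort -u
    }

    # Example: block until a detached controller disappears from the bdev layer.
    while bdev_bdfs | grep -q 0000:00:10.0; do sleep 0.5; done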
00:12:20.981 00:12:20.981 real 2m28.306s 00:12:20.981 user 1m49.657s 00:12:20.981 sys 0m17.204s 00:12:20.981 23:17:44 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.981 23:17:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:20.981 23:17:44 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:20.981 23:17:44 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:20.981 23:17:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:20.981 23:17:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.981 23:17:44 -- common/autotest_common.sh@10 -- # set +x 00:12:20.981 ************************************ 00:12:20.981 START TEST nvme_xnvme 00:12:20.981 ************************************ 00:12:20.981 23:17:44 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:20.981 * Looking for test storage... 00:12:20.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:20.981 23:17:44 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:20.981 23:17:44 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:20.981 23:17:44 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:20.981 23:17:44 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:20.981 23:17:44 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:20.981 23:17:44 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:20.981 23:17:44 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:20.981 23:17:44 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.981 23:17:44 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:20.981 23:17:44 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:20.981 23:17:44 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:20.981 23:17:44 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:20.981 23:17:44 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:20.981 23:17:44 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:20.981 23:17:44 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:20.981 23:17:44 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:20.981 23:17:44 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:20.981 23:17:44 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:20.981 23:17:44 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:20.981 23:17:44 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:20.981 23:17:44 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:20.981 23:17:44 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.981 23:17:44 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:20.981 23:17:44 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:21.240 23:17:44 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:21.240 23:17:44 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:21.240 23:17:44 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:21.240 23:17:44 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:21.240 23:17:44 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:21.240 23:17:44 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:21.240 23:17:44 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:21.240 23:17:44 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:21.240 23:17:44 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:21.240 23:17:44 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:21.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.240 --rc genhtml_branch_coverage=1 00:12:21.240 --rc genhtml_function_coverage=1 00:12:21.240 --rc genhtml_legend=1 00:12:21.240 --rc geninfo_all_blocks=1 00:12:21.240 --rc geninfo_unexecuted_blocks=1 00:12:21.240 00:12:21.240 ' 00:12:21.240 23:17:44 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:21.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.240 --rc genhtml_branch_coverage=1 00:12:21.240 --rc genhtml_function_coverage=1 00:12:21.240 --rc genhtml_legend=1 00:12:21.240 --rc geninfo_all_blocks=1 00:12:21.240 --rc geninfo_unexecuted_blocks=1 00:12:21.240 00:12:21.240 ' 00:12:21.240 23:17:44 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:21.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.240 --rc genhtml_branch_coverage=1 00:12:21.240 --rc genhtml_function_coverage=1 00:12:21.240 --rc genhtml_legend=1 00:12:21.240 --rc geninfo_all_blocks=1 00:12:21.240 --rc geninfo_unexecuted_blocks=1 00:12:21.240 00:12:21.240 ' 00:12:21.240 23:17:44 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:21.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.240 --rc genhtml_branch_coverage=1 00:12:21.240 --rc genhtml_function_coverage=1 00:12:21.240 --rc genhtml_legend=1 00:12:21.240 --rc geninfo_all_blocks=1 00:12:21.240 --rc geninfo_unexecuted_blocks=1 00:12:21.240 00:12:21.240 ' 00:12:21.240 23:17:44 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:21.240 23:17:44 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:21.240 23:17:44 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.240 23:17:44 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.240 23:17:44 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.240 23:17:44 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.240 23:17:44 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.240 23:17:44 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.240 23:17:44 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:21.240 23:17:44 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.240 23:17:44 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:21.240 23:17:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:21.240 23:17:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.240 23:17:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.240 ************************************ 00:12:21.240 START TEST xnvme_to_malloc_dd_copy 00:12:21.240 ************************************ 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1129 -- # malloc_to_xnvme_copy 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:21.240 23:17:44 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:21.240 23:17:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:21.240 { 00:12:21.240 "subsystems": [ 00:12:21.240 { 00:12:21.240 "subsystem": "bdev", 00:12:21.240 "config": [ 00:12:21.240 { 00:12:21.240 "params": { 00:12:21.241 "block_size": 512, 00:12:21.241 "num_blocks": 2097152, 00:12:21.241 "name": "malloc0" 00:12:21.241 }, 00:12:21.241 "method": "bdev_malloc_create" 00:12:21.241 }, 00:12:21.241 { 00:12:21.241 "params": { 00:12:21.241 "io_mechanism": "libaio", 00:12:21.241 "filename": "/dev/nullb0", 00:12:21.241 "name": "null0" 00:12:21.241 }, 00:12:21.241 "method": "bdev_xnvme_create" 00:12:21.241 }, 00:12:21.241 { 00:12:21.241 "method": "bdev_wait_for_examine" 00:12:21.241 } 00:12:21.241 ] 00:12:21.241 } 00:12:21.241 ] 00:12:21.241 } 00:12:21.241 [2024-11-17 23:17:44.909309] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
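gen_conf above emits the whole bdev topology for this pass: a 1 GiB malloc bdev (2097152 blocks of 512 bytes) as the source and an xnvme bdev named null0 on /dev/nullb0, opened through libaio, which spdk_dd consumes over /dev/fd/62. The same copy can be reproduced standalone; a sketch assuming null_blk is already loaded (modprobe null_blk gb=1, as in init_null_blk) and spdk_dd was built at the path the harness uses:

    # Sketch of the malloc0 -> null0 copy that xnvme.sh@42 runs via process substitution.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(cat <<'JSON'
    {"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"block_size":512,"num_blocks":2097152,"name":"malloc0"},
       "method":"bdev_malloc_create"},
      {"params":{"io_mechanism":"libaio","filename":"/dev/nullb0","name":"null0"},
       "method":"bdev_xnvme_create"},
      {"method":"bdev_wait_for_examine"}]}]}
    JSON
    )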
00:12:21.241 [2024-11-17 23:17:44.909546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80064 ] 00:12:21.241 [2024-11-17 23:17:45.055466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.500 [2024-11-17 23:17:45.096136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.881  [2024-11-17T23:17:47.642Z] Copying: 218/1024 [MB] (218 MBps) [2024-11-17T23:17:49.018Z] Copying: 436/1024 [MB] (218 MBps) [2024-11-17T23:17:49.954Z] Copying: 705/1024 [MB] (268 MBps) [2024-11-17T23:17:49.954Z] Copying: 1006/1024 [MB] (300 MBps) [2024-11-17T23:17:50.214Z] Copying: 1024/1024 [MB] (average 252 MBps) 00:12:26.393 00:12:26.393 23:17:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:26.393 23:17:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:26.393 23:17:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:26.393 23:17:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:26.393 { 00:12:26.393 "subsystems": [ 00:12:26.393 { 00:12:26.393 "subsystem": "bdev", 00:12:26.393 "config": [ 00:12:26.393 { 00:12:26.393 "params": { 00:12:26.393 "block_size": 512, 00:12:26.393 "num_blocks": 2097152, 00:12:26.393 "name": "malloc0" 00:12:26.393 }, 00:12:26.393 "method": "bdev_malloc_create" 00:12:26.393 }, 00:12:26.393 { 00:12:26.393 "params": { 00:12:26.393 "io_mechanism": "libaio", 00:12:26.393 "filename": "/dev/nullb0", 00:12:26.393 "name": "null0" 00:12:26.394 }, 00:12:26.394 "method": "bdev_xnvme_create" 00:12:26.394 }, 00:12:26.394 { 00:12:26.394 "method": "bdev_wait_for_examine" 00:12:26.394 } 00:12:26.394 ] 00:12:26.394 } 00:12:26.394 ] 00:12:26.394 } 00:12:26.394 [2024-11-17 23:17:50.163326] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
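The run that follows is the same configuration with the direction flipped (--ib=null0 --ob=malloc0), so each io_mechanism is exercised for both writes to and reads from the null device. Throughput on these 1024 MB passes reflects submission-path cost rather than media speed, since null_blk completes I/O without touching real storage. The device itself comes and goes with the kernel module, as in dd/common.sh:

    # Sketch of init_null_blk / remove_null_blk (root required).
    modprobe null_blk gb=1     # creates /dev/nullb0: 1 GiB, 512-byte logical blocks
    # ... run the copies ...
    modprobe -r null_blk       # removed again at the end of the test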
00:12:26.394 [2024-11-17 23:17:50.163411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80125 ] 00:12:26.653 [2024-11-17 23:17:50.301913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.653 [2024-11-17 23:17:50.324635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.035  [2024-11-17T23:17:52.790Z] Copying: 302/1024 [MB] (302 MBps) [2024-11-17T23:17:53.744Z] Copying: 604/1024 [MB] (302 MBps) [2024-11-17T23:17:54.311Z] Copying: 905/1024 [MB] (301 MBps) [2024-11-17T23:17:54.571Z] Copying: 1024/1024 [MB] (average 301 MBps) 00:12:30.750 00:12:30.750 23:17:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:30.750 23:17:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:30.750 23:17:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:30.750 23:17:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:30.750 23:17:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:30.750 23:17:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:30.750 { 00:12:30.750 "subsystems": [ 00:12:30.750 { 00:12:30.750 "subsystem": "bdev", 00:12:30.751 "config": [ 00:12:30.751 { 00:12:30.751 "params": { 00:12:30.751 "block_size": 512, 00:12:30.751 "num_blocks": 2097152, 00:12:30.751 "name": "malloc0" 00:12:30.751 }, 00:12:30.751 "method": "bdev_malloc_create" 00:12:30.751 }, 00:12:30.751 { 00:12:30.751 "params": { 00:12:30.751 "io_mechanism": "io_uring", 00:12:30.751 "filename": "/dev/nullb0", 00:12:30.751 "name": "null0" 00:12:30.751 }, 00:12:30.751 "method": "bdev_xnvme_create" 00:12:30.751 }, 00:12:30.751 { 00:12:30.751 "method": "bdev_wait_for_examine" 00:12:30.751 } 00:12:30.751 ] 00:12:30.751 } 00:12:30.751 ] 00:12:30.751 } 00:12:30.751 [2024-11-17 23:17:54.514773] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
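xnvme.sh@39 has now flipped io_mechanism to io_uring; nothing else in the config changes, which is the point of the xnvme bdev abstraction — the backend is a single parameter of bdev_xnvme_create. Against a live target the same switch is one RPC away; a sketch assuming a running spdk_tgt on the default socket (the method name and argument order match the config JSON above):

    # Sketch: register the null_blk node as an io_uring xnvme bdev over JSON-RPC.
    rpc.py bdev_xnvme_create /dev/nullb0 null0 io_uring
    rpc.py bdev_get_bdevs -b null0        # confirm it registered
    rpc.py bdev_xnvme_delete null0        # assumed cleanup RPC; remove when done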
00:12:30.751 [2024-11-17 23:17:54.514908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80184 ] 00:12:31.009 [2024-11-17 23:17:54.657309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.009 [2024-11-17 23:17:54.680911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.383  [2024-11-17T23:17:57.149Z] Copying: 309/1024 [MB] (309 MBps) [2024-11-17T23:17:58.089Z] Copying: 620/1024 [MB] (310 MBps) [2024-11-17T23:17:58.348Z] Copying: 930/1024 [MB] (310 MBps) [2024-11-17T23:17:58.916Z] Copying: 1024/1024 [MB] (average 310 MBps) 00:12:35.095 00:12:35.095 23:17:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:35.095 23:17:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:35.095 23:17:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:35.095 23:17:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:35.095 { 00:12:35.095 "subsystems": [ 00:12:35.095 { 00:12:35.095 "subsystem": "bdev", 00:12:35.095 "config": [ 00:12:35.095 { 00:12:35.095 "params": { 00:12:35.095 "block_size": 512, 00:12:35.095 "num_blocks": 2097152, 00:12:35.095 "name": "malloc0" 00:12:35.095 }, 00:12:35.095 "method": "bdev_malloc_create" 00:12:35.095 }, 00:12:35.095 { 00:12:35.095 "params": { 00:12:35.095 "io_mechanism": "io_uring", 00:12:35.095 "filename": "/dev/nullb0", 00:12:35.095 "name": "null0" 00:12:35.095 }, 00:12:35.095 "method": "bdev_xnvme_create" 00:12:35.095 }, 00:12:35.095 { 00:12:35.095 "method": "bdev_wait_for_examine" 00:12:35.095 } 00:12:35.095 ] 00:12:35.095 } 00:12:35.095 ] 00:12:35.095 } 00:12:35.095 [2024-11-17 23:17:58.753862] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:12:35.095 [2024-11-17 23:17:58.754120] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80235 ] 00:12:35.095 [2024-11-17 23:17:58.889722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.353 [2024-11-17 23:17:58.923169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.730  [2024-11-17T23:18:01.485Z] Copying: 310/1024 [MB] (310 MBps) [2024-11-17T23:18:02.419Z] Copying: 624/1024 [MB] (313 MBps) [2024-11-17T23:18:02.677Z] Copying: 937/1024 [MB] (313 MBps) [2024-11-17T23:18:02.938Z] Copying: 1024/1024 [MB] (average 312 MBps) 00:12:39.117 00:12:39.117 23:18:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:39.117 23:18:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:39.378 00:12:39.378 real 0m18.147s 00:12:39.378 user 0m14.792s 00:12:39.378 sys 0m2.818s 00:12:39.378 23:18:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.378 ************************************ 00:12:39.378 END TEST xnvme_to_malloc_dd_copy 00:12:39.378 ************************************ 00:12:39.378 23:18:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:39.378 23:18:03 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:39.378 23:18:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:39.378 23:18:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.378 23:18:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:39.378 ************************************ 00:12:39.378 START TEST xnvme_bdevperf 00:12:39.378 ************************************ 00:12:39.378 23:18:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:39.378 23:18:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:39.378 23:18:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:39.378 23:18:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:39.378 23:18:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:39.378 23:18:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:39.378 23:18:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:39.378 23:18:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:39.378 23:18:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:39.378 23:18:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:39.378 23:18:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:39.378 23:18:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:39.378 23:18:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:39.378 23:18:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:39.378 23:18:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:39.378 23:18:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:39.378 
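xnvme_bdevperf, starting here, reuses the 1 GiB null_blk device and runs build/examples/bdevperf twice — libaio first, then io_uring — with the workload flags visible in the traces below: queue depth 64, 4 KiB random reads, 5 seconds, bound to the null0 bdev. The two passes below settle at roughly 206K IOPS / ~309 us mean latency for libaio and 234K IOPS / ~271 us for io_uring, consistent with 64 in-flight I/Os divided by the completion rate. The standalone form of the invocation, as a sketch:

    # Sketch of the perf pass from xnvme.sh@74 (JSON bdev config fed on an fd).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096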
23:18:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:39.378 23:18:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:39.378 23:18:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:39.378 23:18:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:39.378 23:18:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:39.378 { 00:12:39.379 "subsystems": [ 00:12:39.379 { 00:12:39.379 "subsystem": "bdev", 00:12:39.379 "config": [ 00:12:39.379 { 00:12:39.379 "params": { 00:12:39.379 "io_mechanism": "libaio", 00:12:39.379 "filename": "/dev/nullb0", 00:12:39.379 "name": "null0" 00:12:39.379 }, 00:12:39.379 "method": "bdev_xnvme_create" 00:12:39.379 }, 00:12:39.379 { 00:12:39.379 "method": "bdev_wait_for_examine" 00:12:39.379 } 00:12:39.379 ] 00:12:39.379 } 00:12:39.379 ] 00:12:39.379 } 00:12:39.379 [2024-11-17 23:18:03.117116] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:12:39.379 [2024-11-17 23:18:03.117226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80317 ] 00:12:39.638 [2024-11-17 23:18:03.257201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.638 [2024-11-17 23:18:03.286188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.638 Running I/O for 5 seconds... 00:12:42.037 205440.00 IOPS, 802.50 MiB/s [2024-11-17T23:18:06.426Z] 205728.00 IOPS, 803.62 MiB/s [2024-11-17T23:18:07.803Z] 205781.33 IOPS, 803.83 MiB/s [2024-11-17T23:18:08.739Z] 205888.00 IOPS, 804.25 MiB/s 00:12:44.918 Latency(us) 00:12:44.918 [2024-11-17T23:18:08.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:44.918 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:44.918 null0 : 5.00 205888.77 804.25 0.00 0.00 308.65 115.00 1524.97 00:12:44.918 [2024-11-17T23:18:08.739Z] =================================================================================================================== 00:12:44.918 [2024-11-17T23:18:08.739Z] Total : 205888.77 804.25 0.00 0.00 308.65 115.00 1524.97 00:12:44.918 23:18:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:44.918 23:18:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:44.918 23:18:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:44.918 23:18:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:44.918 23:18:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:44.918 23:18:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:44.918 { 00:12:44.918 "subsystems": [ 00:12:44.918 { 00:12:44.918 "subsystem": "bdev", 00:12:44.918 "config": [ 00:12:44.918 { 00:12:44.918 "params": { 00:12:44.918 "io_mechanism": "io_uring", 00:12:44.918 "filename": "/dev/nullb0", 00:12:44.918 "name": "null0" 00:12:44.918 }, 00:12:44.918 "method": "bdev_xnvme_create" 00:12:44.918 }, 00:12:44.918 { 00:12:44.918 "method": 
"bdev_wait_for_examine" 00:12:44.918 } 00:12:44.918 ] 00:12:44.918 } 00:12:44.918 ] 00:12:44.918 } 00:12:44.918 [2024-11-17 23:18:08.620813] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:12:44.918 [2024-11-17 23:18:08.621127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80385 ] 00:12:45.175 [2024-11-17 23:18:08.761923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.175 [2024-11-17 23:18:08.784357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.175 Running I/O for 5 seconds... 00:12:47.483 236160.00 IOPS, 922.50 MiB/s [2024-11-17T23:18:12.238Z] 235872.00 IOPS, 921.38 MiB/s [2024-11-17T23:18:13.170Z] 235776.00 IOPS, 921.00 MiB/s [2024-11-17T23:18:14.103Z] 234928.00 IOPS, 917.69 MiB/s 00:12:50.282 Latency(us) 00:12:50.282 [2024-11-17T23:18:14.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.282 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:50.282 null0 : 5.00 234088.40 914.41 0.00 0.00 271.38 149.66 1531.27 00:12:50.282 [2024-11-17T23:18:14.103Z] =================================================================================================================== 00:12:50.282 [2024-11-17T23:18:14.103Z] Total : 234088.40 914.41 0.00 0.00 271.38 149.66 1531.27 00:12:50.283 23:18:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:50.283 23:18:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:50.283 ************************************ 00:12:50.283 END TEST xnvme_bdevperf 00:12:50.283 ************************************ 00:12:50.283 00:12:50.283 real 0m11.032s 00:12:50.283 user 0m8.691s 00:12:50.283 sys 0m2.109s 00:12:50.283 23:18:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.283 23:18:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:50.542 ************************************ 00:12:50.542 END TEST nvme_xnvme 00:12:50.542 ************************************ 00:12:50.542 00:12:50.542 real 0m29.447s 00:12:50.542 user 0m23.591s 00:12:50.542 sys 0m5.054s 00:12:50.542 23:18:14 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.542 23:18:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.542 23:18:14 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:50.542 23:18:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:50.542 23:18:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.542 23:18:14 -- common/autotest_common.sh@10 -- # set +x 00:12:50.542 ************************************ 00:12:50.542 START TEST blockdev_xnvme 00:12:50.542 ************************************ 00:12:50.542 23:18:14 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:50.542 * Looking for test storage... 
00:12:50.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:50.542 23:18:14 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:50.542 23:18:14 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:50.542 23:18:14 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:50.542 23:18:14 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.542 23:18:14 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:50.542 23:18:14 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.542 23:18:14 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:50.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.542 --rc genhtml_branch_coverage=1 00:12:50.542 --rc genhtml_function_coverage=1 00:12:50.542 --rc genhtml_legend=1 00:12:50.542 --rc geninfo_all_blocks=1 00:12:50.542 --rc geninfo_unexecuted_blocks=1 00:12:50.542 00:12:50.542 ' 00:12:50.542 23:18:14 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:50.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.542 --rc genhtml_branch_coverage=1 00:12:50.542 --rc genhtml_function_coverage=1 00:12:50.542 --rc genhtml_legend=1 
00:12:50.542 --rc geninfo_all_blocks=1 00:12:50.542 --rc geninfo_unexecuted_blocks=1 00:12:50.542 00:12:50.542 ' 00:12:50.542 23:18:14 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:50.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.542 --rc genhtml_branch_coverage=1 00:12:50.542 --rc genhtml_function_coverage=1 00:12:50.542 --rc genhtml_legend=1 00:12:50.542 --rc geninfo_all_blocks=1 00:12:50.542 --rc geninfo_unexecuted_blocks=1 00:12:50.542 00:12:50.542 ' 00:12:50.542 23:18:14 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:50.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.542 --rc genhtml_branch_coverage=1 00:12:50.542 --rc genhtml_function_coverage=1 00:12:50.542 --rc genhtml_legend=1 00:12:50.542 --rc geninfo_all_blocks=1 00:12:50.542 --rc geninfo_unexecuted_blocks=1 00:12:50.542 00:12:50.542 ' 00:12:50.542 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:50.542 23:18:14 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:50.542 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:50.542 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=80522 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 80522 00:12:50.543 23:18:14 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 80522 ']' 00:12:50.543 23:18:14 blockdev_xnvme -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:50.543 23:18:14 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.543 23:18:14 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.543 23:18:14 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.543 23:18:14 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.543 23:18:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.802 [2024-11-17 23:18:14.411361] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:12:50.802 [2024-11-17 23:18:14.411577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80522 ] 00:12:50.802 [2024-11-17 23:18:14.549968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.802 [2024-11-17 23:18:14.573969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.737 23:18:15 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.737 23:18:15 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:12:51.737 23:18:15 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:51.737 23:18:15 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:51.737 23:18:15 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:51.737 23:18:15 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:51.737 23:18:15 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:51.737 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:51.996 Waiting for block devices as requested 00:12:51.996 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:52.254 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:52.254 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:52.254 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:57.522 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1661 -- 
# is_block_zoned nvme1n1 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:57.522 23:18:21 
blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:57.522 nvme0n1 00:12:57.522 nvme1n1 00:12:57.522 nvme2n1 00:12:57.522 nvme2n2 00:12:57.522 nvme2n3 00:12:57.522 nvme3n1 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.522 23:18:21 
blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.522 23:18:21 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:57.522 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:57.523 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "f1ed5be0-596d-4a32-9bd6-d3d3f2de16d0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f1ed5be0-596d-4a32-9bd6-d3d3f2de16d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "ede3cdda-f159-4960-a85c-0c9d8ddf0efd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ede3cdda-f159-4960-a85c-0c9d8ddf0efd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "7c4eaa10-31fd-401c-90ff-eceb38f5450f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7c4eaa10-31fd-401c-90ff-eceb38f5450f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "4b688e4d-66da-4eb3-84ae-969bdd9b2ac4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4b688e4d-66da-4eb3-84ae-969bdd9b2ac4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "ab8e5bbf-325f-4857-a561-627001b48b4c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ab8e5bbf-325f-4857-a561-627001b48b4c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "f427c0b0-a041-4521-b48e-ec668166149c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f427c0b0-a041-4521-b48e-ec668166149c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:57.523 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:57.523 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:57.523 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:57.523 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 80522 
00:12:57.523 23:18:21 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 80522 ']' 00:12:57.523 23:18:21 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 80522 00:12:57.523 23:18:21 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:12:57.523 23:18:21 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.523 23:18:21 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80522 00:12:57.523 killing process with pid 80522 00:12:57.523 23:18:21 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:57.523 23:18:21 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:57.523 23:18:21 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80522' 00:12:57.523 23:18:21 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 80522 00:12:57.523 23:18:21 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 80522 00:12:57.781 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:57.781 23:18:21 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:57.781 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:57.781 23:18:21 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.781 23:18:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.781 ************************************ 00:12:57.781 START TEST bdev_hello_world 00:12:57.781 ************************************ 00:12:57.781 23:18:21 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:58.039 [2024-11-17 23:18:21.649586] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:12:58.039 [2024-11-17 23:18:21.649688] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80871 ] 00:12:58.039 [2024-11-17 23:18:21.784556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.039 [2024-11-17 23:18:21.808786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.297 [2024-11-17 23:18:21.986622] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:58.297 [2024-11-17 23:18:21.986827] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:58.297 [2024-11-17 23:18:21.986856] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:58.297 [2024-11-17 23:18:21.988700] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:58.297 [2024-11-17 23:18:21.988855] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:58.297 [2024-11-17 23:18:21.988873] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:58.297 [2024-11-17 23:18:21.989049] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
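The hello-world pass above runs the stock SPDK example binary against the first xNVMe bdev: it opens nvme0n1, writes a buffer through an io channel, reads it back, and stops once the round-tripped string is printed. Invoked on its own, with the JSON config and bdev name taken straight from the trace:

# Load the xNVMe bdev configuration and exercise a write followed by a
# read-back on nvme0n1; success ends with "Read string from bdev : Hello World!".
build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1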
00:12:58.297 00:12:58.297 [2024-11-17 23:18:21.989073] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:58.556 ************************************ 00:12:58.556 END TEST bdev_hello_world 00:12:58.556 ************************************ 00:12:58.556 00:12:58.556 real 0m0.547s 00:12:58.556 user 0m0.288s 00:12:58.556 sys 0m0.148s 00:12:58.556 23:18:22 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.556 23:18:22 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:58.556 23:18:22 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:58.556 23:18:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:58.556 23:18:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.556 23:18:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.556 ************************************ 00:12:58.556 START TEST bdev_bounds 00:12:58.556 ************************************ 00:12:58.556 23:18:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:12:58.556 23:18:22 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=80891 00:12:58.556 Process bdevio pid: 80891 00:12:58.556 23:18:22 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:58.556 23:18:22 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 80891' 00:12:58.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.556 23:18:22 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 80891 00:12:58.556 23:18:22 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:58.556 23:18:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 80891 ']' 00:12:58.556 23:18:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.556 23:18:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.556 23:18:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.556 23:18:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.556 23:18:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:58.556 [2024-11-17 23:18:22.252725] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:12:58.556 [2024-11-17 23:18:22.252841] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80891 ] 00:12:58.814 [2024-11-17 23:18:22.393307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:58.815 [2024-11-17 23:18:22.418299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.815 [2024-11-17 23:18:22.418357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.815 [2024-11-17 23:18:22.418317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.380 23:18:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.380 23:18:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:12:59.380 23:18:23 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:59.380 I/O targets: 00:12:59.380 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:59.380 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:59.380 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:59.380 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:59.380 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:59.380 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:59.380 00:12:59.380 00:12:59.380 CUnit - A unit testing framework for C - Version 2.1-3 00:12:59.380 http://cunit.sourceforge.net/ 00:12:59.380 00:12:59.380 00:12:59.380 Suite: bdevio tests on: nvme3n1 00:12:59.380 Test: blockdev write read block ...passed 00:12:59.380 Test: blockdev write zeroes read block ...passed 00:12:59.380 Test: blockdev write zeroes read no split ...passed 00:12:59.380 Test: blockdev write zeroes read split ...passed 00:12:59.642 Test: blockdev write zeroes read split partial ...passed 00:12:59.642 Test: blockdev reset ...passed 00:12:59.642 Test: blockdev write read 8 blocks ...passed 00:12:59.642 Test: blockdev write read size > 128k ...passed 00:12:59.642 Test: blockdev write read invalid size ...passed 00:12:59.642 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.642 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.642 Test: blockdev write read max offset ...passed 00:12:59.642 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.642 Test: blockdev writev readv 8 blocks ...passed 00:12:59.642 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.642 Test: blockdev writev readv block ...passed 00:12:59.642 Test: blockdev writev readv size > 128k ...passed 00:12:59.642 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.642 Test: blockdev comparev and writev ...passed 00:12:59.642 Test: blockdev nvme passthru rw ...passed 00:12:59.642 Test: blockdev nvme passthru vendor specific ...passed 00:12:59.642 Test: blockdev nvme admin passthru ...passed 00:12:59.642 Test: blockdev copy ...passed 00:12:59.642 Suite: bdevio tests on: nvme2n3 00:12:59.642 Test: blockdev write read block ...passed 00:12:59.642 Test: blockdev write zeroes read block ...passed 00:12:59.642 Test: blockdev write zeroes read no split ...passed 00:12:59.642 Test: blockdev write zeroes read split ...passed 00:12:59.642 Test: blockdev write zeroes read split partial ...passed 00:12:59.642 Test: blockdev reset ...passed 
00:12:59.642 Test: blockdev write read 8 blocks ...passed 00:12:59.642 Test: blockdev write read size > 128k ...passed 00:12:59.642 Test: blockdev write read invalid size ...passed 00:12:59.642 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.642 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.642 Test: blockdev write read max offset ...passed 00:12:59.642 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.642 Test: blockdev writev readv 8 blocks ...passed 00:12:59.642 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.642 Test: blockdev writev readv block ...passed 00:12:59.642 Test: blockdev writev readv size > 128k ...passed 00:12:59.642 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.642 Test: blockdev comparev and writev ...passed 00:12:59.642 Test: blockdev nvme passthru rw ...passed 00:12:59.642 Test: blockdev nvme passthru vendor specific ...passed 00:12:59.642 Test: blockdev nvme admin passthru ...passed 00:12:59.642 Test: blockdev copy ...passed 00:12:59.642 Suite: bdevio tests on: nvme2n2 00:12:59.642 Test: blockdev write read block ...passed 00:12:59.642 Test: blockdev write zeroes read block ...passed 00:12:59.642 Test: blockdev write zeroes read no split ...passed 00:12:59.642 Test: blockdev write zeroes read split ...passed 00:12:59.642 Test: blockdev write zeroes read split partial ...passed 00:12:59.642 Test: blockdev reset ...passed 00:12:59.642 Test: blockdev write read 8 blocks ...passed 00:12:59.642 Test: blockdev write read size > 128k ...passed 00:12:59.642 Test: blockdev write read invalid size ...passed 00:12:59.642 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.642 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.642 Test: blockdev write read max offset ...passed 00:12:59.642 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.642 Test: blockdev writev readv 8 blocks ...passed 00:12:59.642 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.642 Test: blockdev writev readv block ...passed 00:12:59.642 Test: blockdev writev readv size > 128k ...passed 00:12:59.642 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.642 Test: blockdev comparev and writev ...passed 00:12:59.642 Test: blockdev nvme passthru rw ...passed 00:12:59.642 Test: blockdev nvme passthru vendor specific ...passed 00:12:59.642 Test: blockdev nvme admin passthru ...passed 00:12:59.642 Test: blockdev copy ...passed 00:12:59.642 Suite: bdevio tests on: nvme2n1 00:12:59.642 Test: blockdev write read block ...passed 00:12:59.642 Test: blockdev write zeroes read block ...passed 00:12:59.642 Test: blockdev write zeroes read no split ...passed 00:12:59.642 Test: blockdev write zeroes read split ...passed 00:12:59.642 Test: blockdev write zeroes read split partial ...passed 00:12:59.642 Test: blockdev reset ...passed 00:12:59.642 Test: blockdev write read 8 blocks ...passed 00:12:59.642 Test: blockdev write read size > 128k ...passed 00:12:59.642 Test: blockdev write read invalid size ...passed 00:12:59.642 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.642 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.642 Test: blockdev write read max offset ...passed 00:12:59.642 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.642 Test: blockdev writev readv 8 blocks 
...passed 00:12:59.642 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.642 Test: blockdev writev readv block ...passed 00:12:59.642 Test: blockdev writev readv size > 128k ...passed 00:12:59.642 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.642 Test: blockdev comparev and writev ...passed 00:12:59.642 Test: blockdev nvme passthru rw ...passed 00:12:59.642 Test: blockdev nvme passthru vendor specific ...passed 00:12:59.642 Test: blockdev nvme admin passthru ...passed 00:12:59.642 Test: blockdev copy ...passed 00:12:59.642 Suite: bdevio tests on: nvme1n1 00:12:59.642 Test: blockdev write read block ...passed 00:12:59.642 Test: blockdev write zeroes read block ...passed 00:12:59.642 Test: blockdev write zeroes read no split ...passed 00:12:59.642 Test: blockdev write zeroes read split ...passed 00:12:59.642 Test: blockdev write zeroes read split partial ...passed 00:12:59.642 Test: blockdev reset ...passed 00:12:59.642 Test: blockdev write read 8 blocks ...passed 00:12:59.642 Test: blockdev write read size > 128k ...passed 00:12:59.642 Test: blockdev write read invalid size ...passed 00:12:59.642 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.642 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.642 Test: blockdev write read max offset ...passed 00:12:59.642 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.642 Test: blockdev writev readv 8 blocks ...passed 00:12:59.642 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.642 Test: blockdev writev readv block ...passed 00:12:59.642 Test: blockdev writev readv size > 128k ...passed 00:12:59.642 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.642 Test: blockdev comparev and writev ...passed 00:12:59.642 Test: blockdev nvme passthru rw ...passed 00:12:59.642 Test: blockdev nvme passthru vendor specific ...passed 00:12:59.642 Test: blockdev nvme admin passthru ...passed 00:12:59.642 Test: blockdev copy ...passed 00:12:59.642 Suite: bdevio tests on: nvme0n1 00:12:59.642 Test: blockdev write read block ...passed 00:12:59.642 Test: blockdev write zeroes read block ...passed 00:12:59.642 Test: blockdev write zeroes read no split ...passed 00:12:59.642 Test: blockdev write zeroes read split ...passed 00:12:59.642 Test: blockdev write zeroes read split partial ...passed 00:12:59.642 Test: blockdev reset ...passed 00:12:59.642 Test: blockdev write read 8 blocks ...passed 00:12:59.642 Test: blockdev write read size > 128k ...passed 00:12:59.642 Test: blockdev write read invalid size ...passed 00:12:59.642 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:59.642 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:59.642 Test: blockdev write read max offset ...passed 00:12:59.642 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.642 Test: blockdev writev readv 8 blocks ...passed 00:12:59.642 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.642 Test: blockdev writev readv block ...passed 00:12:59.642 Test: blockdev writev readv size > 128k ...passed 00:12:59.642 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.642 Test: blockdev comparev and writev ...passed 00:12:59.642 Test: blockdev nvme passthru rw ...passed 00:12:59.642 Test: blockdev nvme passthru vendor specific ...passed 00:12:59.642 Test: blockdev nvme admin passthru ...passed 00:12:59.642 Test: blockdev copy ...passed 
00:12:59.642 00:12:59.642 Run Summary: Type Total Ran Passed Failed Inactive 00:12:59.642 suites 6 6 n/a 0 0 00:12:59.642 tests 138 138 138 0 0 00:12:59.642 asserts 780 780 780 0 n/a 00:12:59.642 00:12:59.642 Elapsed time = 0.437 seconds 00:12:59.642 0 00:12:59.642 23:18:23 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 80891 00:12:59.642 23:18:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 80891 ']' 00:12:59.642 23:18:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 80891 00:12:59.642 23:18:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:12:59.642 23:18:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.642 23:18:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80891 00:12:59.642 killing process with pid 80891 00:12:59.642 23:18:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.642 23:18:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.642 23:18:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80891' 00:12:59.642 23:18:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 80891 00:12:59.642 23:18:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 80891 00:12:59.903 23:18:23 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:59.903 00:12:59.903 real 0m1.396s 00:12:59.903 user 0m3.505s 00:12:59.903 sys 0m0.278s 00:12:59.903 23:18:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.903 ************************************ 00:12:59.903 END TEST bdev_bounds 00:12:59.903 ************************************ 00:12:59.903 23:18:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:59.903 23:18:23 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:59.903 23:18:23 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:59.903 23:18:23 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.903 23:18:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:59.903 ************************************ 00:12:59.903 START TEST bdev_nbd 00:12:59.903 ************************************ 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
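The bounds test that just completed is driven in two halves: bdevio starts in wait mode and registers all six bdevs, then tests.py fires the CUnit suites over the RPC socket (6 suites, 138 tests, 780 asserts, all passing in 0.437 seconds). Reduced to its two commands, with the -w and -s 0 flags copied verbatim from the bdev_bounds invocation above:

# Start the bdevio server in wait mode against the shared bdev config;
# -w makes it wait for an RPC trigger instead of running immediately.
test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &

# Once it is listening, trigger every registered CUnit suite over RPC.
test/bdev/bdevio/tests.py perform_tests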
00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=80947 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 80947 /var/tmp/spdk-nbd.sock 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:59.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 80947 ']' 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.903 23:18:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:59.903 [2024-11-17 23:18:23.700503] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:12:59.903 [2024-11-17 23:18:23.700597] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.162 [2024-11-17 23:18:23.838290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.162 [2024-11-17 23:18:23.861819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.097 
1+0 records in 00:13:01.097 1+0 records out 00:13:01.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103803 s, 3.9 MB/s 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:01.097 23:18:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:01.355 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:01.355 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:01.355 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:01.355 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:01.355 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:01.355 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:01.355 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:01.355 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:01.355 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:01.355 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:01.355 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:01.355 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.355 1+0 records in 00:13:01.355 1+0 records out 00:13:01.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000774803 s, 5.3 MB/s 00:13:01.355 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.355 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:01.355 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.355 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:01.355 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:01.355 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:01.355 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:01.355 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:01.614 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:01.614 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:01.614 23:18:25 
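The cycle just shown for nvme0n1 repeats below for the remaining bdevs, and the harness later detaches every device again: export the bdev as an NBD node, wait for the kernel to register it, and prove it is readable with a single O_DIRECT block copy. A condensed sketch of one full cycle, assuming bdev_svc is already listening on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded; the retry loop is a simplified reconstruction of the waitfornbd helper, and /tmp/nbdtest is an illustrative scratch path:

sock=/var/tmp/spdk-nbd.sock

# Export the bdev; the RPC reply names the /dev/nbdX node it was assigned.
dev=$(scripts/rpc.py -s "$sock" nbd_start_disk nvme0n1)

# Poll (up to 20 tries, like the helper) until the kernel lists the device.
for ((i = 1; i <= 20; i++)); do
  grep -q -w "$(basename "$dev")" /proc/partitions && break
  sleep 0.1
done

# Copy one 4 KiB block with O_DIRECT and confirm the copy is non-empty.
dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
[[ $(stat -c %s /tmp/nbdtest) -ne 0 ]]
rm -f /tmp/nbdtest

# Tear the export down again; the harness then waits for the node to
# vanish from /proc/partitions before reusing it.
scripts/rpc.py -s "$sock" nbd_stop_disk "$dev"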
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:01.614 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:13:01.614 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:01.614 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:01.614 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:01.614 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:13:01.614 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:01.614 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:01.614 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:01.614 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.614 1+0 records in 00:13:01.614 1+0 records out 00:13:01.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000667157 s, 6.1 MB/s 00:13:01.614 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.614 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:01.614 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.614 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:01.614 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:01.615 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:01.615 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:01.615 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.873 1+0 records in 00:13:01.873 1+0 records out 00:13:01.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000842656 s, 4.9 MB/s 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.873 1+0 records in 00:13:01.873 1+0 records out 00:13:01.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000721172 s, 5.7 MB/s 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:01.873 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:02.131 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:02.131 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:02.131 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:02.131 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:13:02.131 23:18:25 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:02.131 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.131 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.131 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:13:02.131 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:02.131 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.131 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.131 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.131 1+0 records in 00:13:02.131 1+0 records out 00:13:02.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000687223 s, 6.0 MB/s 00:13:02.131 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.131 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:02.131 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.131 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.131 23:18:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:02.131 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.131 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:02.131 23:18:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:02.389 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:02.389 { 00:13:02.389 "nbd_device": "/dev/nbd0", 00:13:02.389 "bdev_name": "nvme0n1" 00:13:02.389 }, 00:13:02.389 { 00:13:02.389 "nbd_device": "/dev/nbd1", 00:13:02.389 "bdev_name": "nvme1n1" 00:13:02.389 }, 00:13:02.389 { 00:13:02.389 "nbd_device": "/dev/nbd2", 00:13:02.389 "bdev_name": "nvme2n1" 00:13:02.389 }, 00:13:02.389 { 00:13:02.389 "nbd_device": "/dev/nbd3", 00:13:02.389 "bdev_name": "nvme2n2" 00:13:02.389 }, 00:13:02.389 { 00:13:02.389 "nbd_device": "/dev/nbd4", 00:13:02.389 "bdev_name": "nvme2n3" 00:13:02.389 }, 00:13:02.389 { 00:13:02.389 "nbd_device": "/dev/nbd5", 00:13:02.389 "bdev_name": "nvme3n1" 00:13:02.389 } 00:13:02.389 ]' 00:13:02.389 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:02.389 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:02.389 { 00:13:02.389 "nbd_device": "/dev/nbd0", 00:13:02.389 "bdev_name": "nvme0n1" 00:13:02.389 }, 00:13:02.389 { 00:13:02.389 "nbd_device": "/dev/nbd1", 00:13:02.389 "bdev_name": "nvme1n1" 00:13:02.389 }, 00:13:02.389 { 00:13:02.389 "nbd_device": "/dev/nbd2", 00:13:02.389 "bdev_name": "nvme2n1" 00:13:02.389 }, 00:13:02.389 { 00:13:02.389 "nbd_device": "/dev/nbd3", 00:13:02.389 "bdev_name": "nvme2n2" 00:13:02.390 }, 00:13:02.390 { 00:13:02.390 "nbd_device": "/dev/nbd4", 00:13:02.390 "bdev_name": "nvme2n3" 00:13:02.390 }, 00:13:02.390 { 00:13:02.390 "nbd_device": "/dev/nbd5", 00:13:02.390 "bdev_name": "nvme3n1" 00:13:02.390 } 00:13:02.390 ]' 00:13:02.390 23:18:26 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:02.390 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:02.390 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.390 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:02.390 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:02.390 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:02.390 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.390 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:02.648 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:02.648 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:02.648 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:02.648 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.648 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.648 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:02.648 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.648 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.648 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.648 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:02.907 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:02.907 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:02.907 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:02.907 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.907 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.907 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:02.907 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.907 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.907 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.907 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:03.166 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:03.166 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:03.166 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:03.166 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.166 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.166 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:03.166 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.166 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.166 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.166 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:03.166 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:03.166 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:03.166 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:03.166 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.166 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.166 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:03.166 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.166 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.166 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.166 23:18:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:03.425 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:03.425 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:03.425 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:03.425 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.425 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.425 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:03.425 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.425 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.425 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.425 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:03.682 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:03.683 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:03.683 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:03.683 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.683 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.683 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:03.683 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.683 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.683 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:03.683 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:03.683 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:03.941 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:04.202 /dev/nbd0 00:13:04.202 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:04.202 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:04.202 23:18:27 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:04.202 23:18:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:04.202 23:18:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:04.202 23:18:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:04.202 23:18:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:04.202 23:18:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:04.202 23:18:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:04.202 23:18:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:04.202 23:18:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.202 1+0 records in 00:13:04.202 1+0 records out 00:13:04.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105621 s, 3.9 MB/s 00:13:04.202 23:18:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.202 23:18:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:04.202 23:18:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.202 23:18:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:04.202 23:18:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:04.202 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.202 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:04.202 23:18:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:04.463 /dev/nbd1 00:13:04.463 23:18:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:04.463 23:18:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:04.463 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:04.463 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:04.463 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:04.463 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:04.463 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:04.463 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:04.463 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:04.463 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:04.463 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.463 1+0 records in 00:13:04.463 1+0 records out 00:13:04.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000667369 s, 6.1 MB/s 00:13:04.463 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.463 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:04.463 23:18:28 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.463 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:04.463 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:04.463 23:18:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.463 23:18:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:04.463 23:18:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:04.724 /dev/nbd10 00:13:04.724 23:18:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:04.724 23:18:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:04.724 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:13:04.724 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:04.724 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:04.724 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:04.724 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:13:04.724 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:04.724 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:04.724 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:04.724 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.724 1+0 records in 00:13:04.724 1+0 records out 00:13:04.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000795201 s, 5.2 MB/s 00:13:04.724 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.724 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:04.724 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.724 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:04.724 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:04.724 23:18:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.724 23:18:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:04.724 23:18:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:04.986 /dev/nbd11 00:13:04.986 23:18:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:04.986 23:18:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:04.986 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:13:04.986 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:04.986 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:04.986 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:04.986 23:18:28 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:13:04.986 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:04.986 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:04.986 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:04.986 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.986 1+0 records in 00:13:04.986 1+0 records out 00:13:04.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000900449 s, 4.5 MB/s 00:13:04.986 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.986 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:04.986 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.986 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:04.986 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:04.986 23:18:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.986 23:18:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:04.986 23:18:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:04.986 /dev/nbd12 00:13:05.248 23:18:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:05.248 23:18:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:05.248 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:13:05.248 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:05.248 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:05.248 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:05.248 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:13:05.248 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:05.248 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:05.248 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:05.248 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.248 1+0 records in 00:13:05.248 1+0 records out 00:13:05.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000785933 s, 5.2 MB/s 00:13:05.248 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.248 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:05.248 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.248 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:05.248 23:18:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:05.248 23:18:28 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.248 23:18:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:05.249 23:18:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:05.249 /dev/nbd13 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.510 1+0 records in 00:13:05.510 1+0 records out 00:13:05.510 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446213 s, 9.2 MB/s 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:05.510 { 00:13:05.510 "nbd_device": "/dev/nbd0", 00:13:05.510 "bdev_name": "nvme0n1" 00:13:05.510 }, 00:13:05.510 { 00:13:05.510 "nbd_device": "/dev/nbd1", 00:13:05.510 "bdev_name": "nvme1n1" 00:13:05.510 }, 00:13:05.510 { 00:13:05.510 "nbd_device": "/dev/nbd10", 00:13:05.510 "bdev_name": "nvme2n1" 00:13:05.510 }, 00:13:05.510 { 00:13:05.510 "nbd_device": "/dev/nbd11", 00:13:05.510 "bdev_name": "nvme2n2" 00:13:05.510 }, 00:13:05.510 { 00:13:05.510 "nbd_device": "/dev/nbd12", 00:13:05.510 "bdev_name": "nvme2n3" 00:13:05.510 }, 00:13:05.510 { 00:13:05.510 "nbd_device": "/dev/nbd13", 00:13:05.510 "bdev_name": "nvme3n1" 00:13:05.510 } 00:13:05.510 ]' 00:13:05.510 23:18:29 
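All six attachments above follow the same pattern: an nbd_start_disk RPC binds a bdev to a /dev/nbdX node, the RPC echoes the device path, and the waitfornbd helper polls /proc/partitions until the kernel exposes the node, then proves it usable with a single 4 KiB O_DIRECT read. A minimal sketch of that flow for one device, assuming the repo paths from the trace (the retry delay and scratch-file location are assumptions; the trace only shows the 20-try bound, the grep, and the dd/stat check):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc" -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0
    # Wait until the kernel lists the node in the partition table.
    for i in $(seq 1 20); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1                       # assumed back-off; not visible in the trace
    done
    # One 4 KiB direct read must succeed and yield a 4096-byte file.
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ] && rm -f /tmp/nbdtest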
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:05.510 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:05.510 { 00:13:05.510 "nbd_device": "/dev/nbd0", 00:13:05.510 "bdev_name": "nvme0n1" 00:13:05.510 }, 00:13:05.510 { 00:13:05.510 "nbd_device": "/dev/nbd1", 00:13:05.510 "bdev_name": "nvme1n1" 00:13:05.510 }, 00:13:05.510 { 00:13:05.510 "nbd_device": "/dev/nbd10", 00:13:05.510 "bdev_name": "nvme2n1" 00:13:05.510 }, 00:13:05.510 { 00:13:05.510 "nbd_device": "/dev/nbd11", 00:13:05.510 "bdev_name": "nvme2n2" 00:13:05.510 }, 00:13:05.510 { 00:13:05.510 "nbd_device": "/dev/nbd12", 00:13:05.510 "bdev_name": "nvme2n3" 00:13:05.510 }, 00:13:05.510 { 00:13:05.510 "nbd_device": "/dev/nbd13", 00:13:05.510 "bdev_name": "nvme3n1" 00:13:05.510 } 00:13:05.510 ]' 00:13:05.771 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:05.771 /dev/nbd1 00:13:05.771 /dev/nbd10 00:13:05.771 /dev/nbd11 00:13:05.771 /dev/nbd12 00:13:05.771 /dev/nbd13' 00:13:05.771 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:05.771 /dev/nbd1 00:13:05.771 /dev/nbd10 00:13:05.771 /dev/nbd11 00:13:05.771 /dev/nbd12 00:13:05.771 /dev/nbd13' 00:13:05.771 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:05.771 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:05.771 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:05.771 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:05.771 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:05.771 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:05.771 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:05.771 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:05.771 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:05.771 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:05.771 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:05.771 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:05.771 256+0 records in 00:13:05.771 256+0 records out 00:13:05.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119319 s, 87.9 MB/s 00:13:05.772 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:05.772 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:06.033 256+0 records in 00:13:06.033 256+0 records out 00:13:06.033 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.224324 s, 4.7 MB/s 00:13:06.033 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.033 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:06.294 256+0 records in 00:13:06.294 256+0 records out 00:13:06.294 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.308081 s, 3.4 MB/s 00:13:06.294 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.294 23:18:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:06.294 256+0 records in 00:13:06.294 256+0 records out 00:13:06.294 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163512 s, 6.4 MB/s 00:13:06.294 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.294 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:06.555 256+0 records in 00:13:06.555 256+0 records out 00:13:06.555 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142958 s, 7.3 MB/s 00:13:06.555 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.555 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:06.817 256+0 records in 00:13:06.817 256+0 records out 00:13:06.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240547 s, 4.4 MB/s 00:13:06.817 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.817 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:07.078 256+0 records in 00:13:07.078 256+0 records out 00:13:07.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.253097 s, 4.1 MB/s 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.078 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:07.340 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:07.340 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:07.340 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:07.340 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.340 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.340 23:18:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:07.340 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.340 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.340 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.340 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:07.601 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:07.601 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:07.601 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:07.601 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.601 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.601 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:07.601 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.601 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.601 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.601 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
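The write and verify passes above push the same 1 MiB urandom pattern through every NBD node with O_DIRECT and then compare it byte-for-byte against the source file. The suite runs write and verify as two separate passes over the device list; condensed into one loop it amounts to:

    src=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$src" bs=4096 count=256              # 1 MiB reference pattern
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if="$src" of="$nbd" bs=4096 count=256 oflag=direct   # write pass
        cmp -b -n 1M "$src" "$nbd"                              # verify pass; non-zero exit on any mismatch
    done
    rm "$src"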
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:07.867 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:07.867 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:07.867 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:07.867 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.867 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.867 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:07.867 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.867 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.867 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.867 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:07.867 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:07.867 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:07.867 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:07.867 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.867 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.867 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:08.124 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.124 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.124 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.124 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:08.124 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:08.124 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:08.124 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:08.125 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.125 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.125 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:08.125 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.125 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.125 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.125 23:18:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:08.382 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:08.382 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:08.383 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:08.383 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.383 23:18:32 
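Detaching mirrors attaching: nbd_stop_disk is issued per device, and waitfornbd_exit polls until the name drops out of /proc/partitions. A sketch for one device, with the same assumed retry delay as before:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    for i in $(seq 1 20); do
        grep -q -w nbd0 /proc/partitions || break   # gone from the table means fully detached
        sleep 0.1
    done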
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.383 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:08.383 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.383 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.383 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:08.383 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:08.383 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:08.706 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:08.706 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:08.706 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:08.706 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:08.706 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:08.706 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:08.706 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:08.706 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:08.706 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:08.706 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:08.706 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:08.706 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:08.706 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:08.707 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:08.707 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:08.707 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:09.025 malloc_lvol_verify 00:13:09.025 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:09.025 0002c07c-5a55-496b-95ee-c4a13bfc69da 00:13:09.025 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:09.284 86dcd3c4-4dbd-4dd3-8ccb-f04cb6bfedac 00:13:09.284 23:18:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:09.543 /dev/nbd0 00:13:09.543 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:09.543 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:09.543 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:09.543 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:09.543 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
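The lvol round trip above stacks a 16 MiB malloc bdev (512-byte blocks), a logical volume store, and a 4 MiB lvol, exports the lvol over NBD, confirms /sys/block/nbd0/size is non-zero, and then formats it; the mke2fs output that follows is the pass/fail signal. Reconstructed from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume on lvs
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    [ -e /sys/block/nbd0/size ]
    (( $(cat /sys/block/nbd0/size) > 0 ))   # capacity must be exposed before formatting
    mkfs.ext4 /dev/nbd0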
00:13:09.543 mke2fs 1.47.0 (5-Feb-2023) 00:13:09.543 Discarding device blocks: 0/4096 done 00:13:09.543 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:09.543 00:13:09.543 Allocating group tables: 0/1 done 00:13:09.543 Writing inode tables: 0/1 done 00:13:09.543 Creating journal (1024 blocks): done 00:13:09.543 Writing superblocks and filesystem accounting information: 0/1 done 00:13:09.543 00:13:09.543 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:09.543 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.543 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:09.543 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.543 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:09.543 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.543 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 80947 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 80947 ']' 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 80947 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80947 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.801 killing process with pid 80947 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80947' 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 80947 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 80947 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:09.801 00:13:09.801 real 0m9.955s 00:13:09.801 user 0m13.630s 00:13:09.801 sys 0m3.560s 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.801 23:18:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:09.801 ************************************ 
00:13:09.801 END TEST bdev_nbd 00:13:09.801 ************************************ 00:13:10.062 23:18:33 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:10.062 23:18:33 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:10.062 23:18:33 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:10.062 23:18:33 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:10.062 23:18:33 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:10.062 23:18:33 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.062 23:18:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.062 ************************************ 00:13:10.062 START TEST bdev_fio 00:13:10.062 ************************************ 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:13:10.062 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:10.062 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:10.063 ************************************ 00:13:10.063 START TEST bdev_fio_rw_verify 00:13:10.063 ************************************ 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
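The generated bdev.fio gets one [job_*] section per bdev, each pinned to its bdev by filename, before fio is launched with the spdk_bdev ioengine and the SPDK JSON config. The per-job generation above is, in effect:

    fio_cfg=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
    for b in nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1; do
        printf '[job_%s]\nfilename=%s\n' "$b" "$b"
    done >> "$fio_cfg"

The version probe above appends serialize_overlap=1 whenever fio reports a 3.x version, as seen in the fio-3.35 check.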
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:10.063 23:18:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:10.322 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.322 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.322 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.322 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.322 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.322 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.322 fio-3.35 00:13:10.322 Starting 6 threads 00:13:22.653 00:13:22.653 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=81338: Sun Nov 17 23:18:44 2024 00:13:22.653 read: IOPS=21.7k, BW=84.8MiB/s (89.0MB/s)(849MiB/10002msec) 00:13:22.653 slat (usec): min=2, max=2553, avg= 5.30, stdev=12.96 00:13:22.653 clat (usec): min=74, max=188616, avg=878.40, 
stdev=952.73 00:13:22.653 lat (usec): min=77, max=188628, avg=883.71, stdev=953.34 00:13:22.653 clat percentiles (usec): 00:13:22.653 | 50.000th=[ 578], 99.000th=[ 3523], 99.900th=[ 5014], 00:13:22.653 | 99.990th=[ 6521], 99.999th=[127402] 00:13:22.653 write: IOPS=22.0k, BW=85.8MiB/s (90.0MB/s)(858MiB/10002msec); 0 zone resets 00:13:22.653 slat (usec): min=10, max=3968, avg=30.48, stdev=107.33 00:13:22.653 clat (usec): min=71, max=199824, avg=1047.58, stdev=2669.54 00:13:22.653 lat (usec): min=97, max=199896, avg=1078.06, stdev=2674.38 00:13:22.653 clat percentiles (usec): 00:13:22.653 | 50.000th=[ 676], 99.000th=[ 3949], 99.900th=[ 5473], 00:13:22.653 | 99.990th=[196084], 99.999th=[200279] 00:13:22.653 bw ( KiB/s): min=44065, max=186112, per=100.00%, avg=88998.42, stdev=8065.51, samples=114 00:13:22.653 iops : min=11014, max=46528, avg=22248.84, stdev=2016.43, samples=114 00:13:22.653 lat (usec) : 100=0.05%, 250=10.02%, 500=28.08%, 750=19.56%, 1000=9.68% 00:13:22.653 lat (msec) : 2=21.31%, 4=10.61%, 10=0.69%, 250=0.01% 00:13:22.653 cpu : usr=47.44%, sys=30.78%, ctx=7072, majf=0, minf=21450 00:13:22.653 IO depths : 1=11.9%, 2=24.4%, 4=50.6%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:22.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.653 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.653 issued rwts: total=217249,219696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.653 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:22.653 00:13:22.653 Run status group 0 (all jobs): 00:13:22.653 READ: bw=84.8MiB/s (89.0MB/s), 84.8MiB/s-84.8MiB/s (89.0MB/s-89.0MB/s), io=849MiB (890MB), run=10002-10002msec 00:13:22.653 WRITE: bw=85.8MiB/s (90.0MB/s), 85.8MiB/s-85.8MiB/s (90.0MB/s-90.0MB/s), io=858MiB (900MB), run=10002-10002msec 00:13:22.653 ----------------------------------------------------- 00:13:22.653 Suppressions used: 00:13:22.653 count bytes template 00:13:22.653 6 48 /usr/src/fio/parse.c 00:13:22.653 2293 220128 /usr/src/fio/iolog.c 00:13:22.653 1 8 libtcmalloc_minimal.so 00:13:22.653 1 904 libcrypto.so 00:13:22.653 ----------------------------------------------------- 00:13:22.653 00:13:22.653 00:13:22.653 real 0m11.070s 00:13:22.653 user 0m29.112s 00:13:22.653 sys 0m18.757s 00:13:22.653 23:18:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.653 ************************************ 00:13:22.653 END TEST bdev_fio_rw_verify 00:13:22.653 ************************************ 00:13:22.653 23:18:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:22.653 23:18:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:22.653 23:18:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:22.653 23:18:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:22.653 23:18:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:22.653 23:18:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:13:22.653 23:18:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:13:22.653 23:18:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:22.653 23:18:44 blockdev_xnvme.bdev_fio -- 
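The teardown that follows regenerates bdev.fio for a trim workload, but a job section is only emitted for bdevs whose JSON dump advertises unmap support; the selection is a jq filter over the bdev dump. An equivalent stand-alone query (a sketch; it assumes the dump comes from the bdev_get_bdevs RPC, which returns an array, hence the leading .[] that the trace's per-object stream does not need):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'

Every xNVMe bdev in the dump below reports "unmap": false, so the selection comes back empty and no trim jobs are written.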
common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:13:22.653 23:18:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:22.653 23:18:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:13:22.653 23:18:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:22.653 23:18:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:22.653 23:18:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:22.653 23:18:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:13:22.653 23:18:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:13:22.653 23:18:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:13:22.653 23:18:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:22.654 23:18:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "f1ed5be0-596d-4a32-9bd6-d3d3f2de16d0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f1ed5be0-596d-4a32-9bd6-d3d3f2de16d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "ede3cdda-f159-4960-a85c-0c9d8ddf0efd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ede3cdda-f159-4960-a85c-0c9d8ddf0efd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "7c4eaa10-31fd-401c-90ff-eceb38f5450f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7c4eaa10-31fd-401c-90ff-eceb38f5450f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "4b688e4d-66da-4eb3-84ae-969bdd9b2ac4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4b688e4d-66da-4eb3-84ae-969bdd9b2ac4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "ab8e5bbf-325f-4857-a561-627001b48b4c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ab8e5bbf-325f-4857-a561-627001b48b4c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "f427c0b0-a041-4521-b48e-ec668166149c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f427c0b0-a041-4521-b48e-ec668166149c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:22.654 23:18:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:22.654 23:18:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:22.654 /home/vagrant/spdk_repo/spdk 00:13:22.654 ************************************ 00:13:22.654 END TEST bdev_fio 00:13:22.654 ************************************ 00:13:22.654 23:18:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:22.654 23:18:44 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:22.654 23:18:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:22.654 00:13:22.654 real 0m11.245s 00:13:22.654 user 0m29.192s 00:13:22.654 sys 0m18.835s 00:13:22.654 23:18:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.654 23:18:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:22.654 23:18:44 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:22.654 23:18:44 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:22.654 23:18:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:22.654 23:18:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.654 23:18:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:22.654 ************************************ 00:13:22.654 START TEST bdev_verify 00:13:22.654 ************************************ 00:13:22.654 23:18:44 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:22.654 [2024-11-17 23:18:45.038408] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:13:22.654 [2024-11-17 23:18:45.038553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81507 ] 00:13:22.654 [2024-11-17 23:18:45.187194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:22.654 [2024-11-17 23:18:45.230427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.654 [2024-11-17 23:18:45.230471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.654 Running I/O for 5 seconds... 
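bdev_verify drives the same six bdevs with the bdevperf example app: queue depth 128, 4 KiB I/Os, a 5-second verify workload on two reactor cores (-m 0x3), per the EAL banner above. The invocation, reconstructed from the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

With -C every active core submits I/O to every bdev, which is why each device reports one row per core mask (0x1 and 0x2) in the results below.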
00:13:24.283 23520.00 IOPS, 91.88 MiB/s
[2024-11-17T23:18:49.053Z] 24192.00 IOPS, 94.50 MiB/s
[2024-11-17T23:18:50.000Z] 23989.33 IOPS, 93.71 MiB/s
[2024-11-17T23:18:50.936Z] 23728.00 IOPS, 92.69 MiB/s
[2024-11-17T23:18:50.936Z] 23532.80 IOPS, 91.92 MiB/s
00:13:27.115 Latency(us)
00:13:27.115 [2024-11-17T23:18:50.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:27.115 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:27.115 Verification LBA range: start 0x0 length 0xa0000
00:13:27.115 nvme0n1 : 5.03 1629.78 6.37 0.00 0.00 78396.29 9023.80 416204.01
00:13:27.115 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:27.115 Verification LBA range: start 0xa0000 length 0xa0000
00:13:27.115 nvme0n1 : 5.02 1427.02 5.57 0.00 0.00 89532.65 7360.20 442015.11
00:13:27.115 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:27.115 Verification LBA range: start 0x0 length 0xbd0bd
00:13:27.115 nvme1n1 : 5.03 2518.02 9.84 0.00 0.00 50588.59 4486.70 62914.56
00:13:27.115 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:27.115 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:13:27.115 nvme1n1 : 5.04 2471.75 9.66 0.00 0.00 51551.08 5747.00 56461.78
00:13:27.115 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:27.115 Verification LBA range: start 0x0 length 0x80000
00:13:27.115 nvme2n1 : 5.06 1973.43 7.71 0.00 0.00 64431.13 5671.38 72593.72
00:13:27.115 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:27.115 Verification LBA range: start 0x80000 length 0x80000
00:13:27.115 nvme2n1 : 5.07 1891.78 7.39 0.00 0.00 67024.49 6856.07 58478.28
00:13:27.115 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:27.115 Verification LBA range: start 0x0 length 0x80000
00:13:27.115 nvme2n2 : 5.04 1931.31 7.54 0.00 0.00 65762.21 12451.84 62511.26
00:13:27.115 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:27.115 Verification LBA range: start 0x80000 length 0x80000
00:13:27.115 nvme2n2 : 5.08 1866.02 7.29 0.00 0.00 67810.73 6956.90 62107.96
00:13:27.115 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:27.115 Verification LBA range: start 0x0 length 0x80000
00:13:27.115 nvme2n3 : 5.07 1945.39 7.60 0.00 0.00 65147.20 3831.34 62511.26
00:13:27.115 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:27.115 Verification LBA range: start 0x80000 length 0x80000
00:13:27.115 nvme2n3 : 5.08 1865.48 7.29 0.00 0.00 67738.74 6906.49 70577.23
00:13:27.115 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:27.115 Verification LBA range: start 0x0 length 0x20000
00:13:27.115 nvme3n1 : 5.07 1944.78 7.60 0.00 0.00 65047.06 4688.34 68964.04
00:13:27.115 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:27.115 Verification LBA range: start 0x20000 length 0x20000
00:13:27.115 nvme3n1 : 5.07 1842.50 7.20 0.00 0.00 68511.33 4864.79 65334.35
00:13:27.115 [2024-11-17T23:18:50.936Z] ===================================================================================================================
00:13:27.115 [2024-11-17T23:18:50.936Z] Total : 23307.25 91.04 0.00 0.00 65373.69 3831.34 442015.11
00:13:27.115
00:13:27.115 real 0m5.913s
00:13:27.115 user 0m9.528s
00:13:27.115 sys 0m1.395s
00:13:27.115 23:18:50 blockdev_xnvme.bdev_verify --
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.115 23:18:50 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:27.115 ************************************ 00:13:27.115 END TEST bdev_verify 00:13:27.116 ************************************ 00:13:27.116 23:18:50 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:27.116 23:18:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:27.116 23:18:50 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.116 23:18:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:27.375 ************************************ 00:13:27.375 START TEST bdev_verify_big_io 00:13:27.375 ************************************ 00:13:27.376 23:18:50 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:27.376 [2024-11-17 23:18:51.010812] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:13:27.376 [2024-11-17 23:18:51.010959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81594 ] 00:13:27.376 [2024-11-17 23:18:51.159123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:27.635 [2024-11-17 23:18:51.202863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.635 [2024-11-17 23:18:51.202961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.896 Running I/O for 5 seconds... 
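The big-I/O pass reuses the same harness with 64 KiB I/Os, so IOPS drops while per-I/O bandwidth rises; throughput in the table is simply IOPS times I/O size. A sketch under the same path assumptions as above:

# e.g. the third progress sample below: 2902 IOPS x 64 KiB = 2902/16 MiB/s ≈ 181.4 MiB/s
./build/examples/bdevperf --json ./test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3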
00:13:33.742 1688.00 IOPS, 105.50 MiB/s [2024-11-17T23:18:57.563Z] 2667.00 IOPS, 166.69 MiB/s [2024-11-17T23:18:57.823Z] 2902.00 IOPS, 181.38 MiB/s 00:13:34.002 Latency(us) 00:13:34.002 [2024-11-17T23:18:57.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.002 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:34.002 Verification LBA range: start 0x0 length 0xa000 00:13:34.002 nvme0n1 : 5.97 140.60 8.79 0.00 0.00 889361.07 81062.99 1303460.63 00:13:34.002 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:34.002 Verification LBA range: start 0xa000 length 0xa000 00:13:34.002 nvme0n1 : 5.99 108.16 6.76 0.00 0.00 1152426.33 12149.37 1200216.22 00:13:34.002 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:34.002 Verification LBA range: start 0x0 length 0xbd0b 00:13:34.002 nvme1n1 : 5.94 193.81 12.11 0.00 0.00 615662.58 18450.90 858219.13 00:13:34.002 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:34.002 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:34.002 nvme1n1 : 5.99 106.77 6.67 0.00 0.00 1136313.90 95581.74 2155226.98 00:13:34.002 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:34.002 Verification LBA range: start 0x0 length 0x8000 00:13:34.002 nvme2n1 : 5.88 153.73 9.61 0.00 0.00 753099.65 36901.81 858219.13 00:13:34.002 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:34.002 Verification LBA range: start 0x8000 length 0x8000 00:13:34.002 nvme2n1 : 6.00 128.03 8.00 0.00 0.00 916828.42 87919.06 877577.45 00:13:34.002 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:34.002 Verification LBA range: start 0x0 length 0x8000 00:13:34.002 nvme2n2 : 5.96 104.64 6.54 0.00 0.00 1088425.48 17745.13 2426243.54 00:13:34.002 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:34.002 Verification LBA range: start 0x8000 length 0x8000 00:13:34.002 nvme2n2 : 6.00 61.31 3.83 0.00 0.00 1859864.56 93565.24 3561932.01 00:13:34.002 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:34.002 Verification LBA range: start 0x0 length 0x8000 00:13:34.002 nvme2n3 : 5.96 158.87 9.93 0.00 0.00 692485.60 57268.38 1213121.77 00:13:34.002 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:34.002 Verification LBA range: start 0x8000 length 0x8000 00:13:34.002 nvme2n3 : 6.01 135.80 8.49 0.00 0.00 818302.46 19459.15 1509949.44 00:13:34.002 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:34.002 Verification LBA range: start 0x0 length 0x2000 00:13:34.002 nvme3n1 : 5.97 147.50 9.22 0.00 0.00 725462.30 12502.25 1451874.46 00:13:34.002 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:34.002 Verification LBA range: start 0x2000 length 0x2000 00:13:34.002 nvme3n1 : 6.02 125.01 7.81 0.00 0.00 860635.90 8015.56 1742249.35 00:13:34.002 [2024-11-17T23:18:57.823Z] =================================================================================================================== 00:13:34.002 [2024-11-17T23:18:57.823Z] Total : 1564.23 97.76 0.00 0.00 887458.56 8015.56 3561932.01 00:13:34.263 00:13:34.263 real 0m6.940s 00:13:34.263 user 0m12.630s 00:13:34.263 sys 0m0.528s 00:13:34.263 23:18:57 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.263 23:18:57 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.263 ************************************ 00:13:34.263 END TEST bdev_verify_big_io 00:13:34.263 ************************************ 00:13:34.263 23:18:57 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:34.263 23:18:57 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:34.263 23:18:57 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.263 23:18:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:34.263 ************************************ 00:13:34.263 START TEST bdev_write_zeroes 00:13:34.263 ************************************ 00:13:34.263 23:18:57 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:34.263 [2024-11-17 23:18:58.027600] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:13:34.263 [2024-11-17 23:18:58.027750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81693 ] 00:13:34.524 [2024-11-17 23:18:58.174870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.524 [2024-11-17 23:18:58.213820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.784 Running I/O for 1 seconds... 
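This pass exercises zero-fill writes for one second; no core mask is passed, so a single reactor on core 0 handles all jobs (matching the Core Mask 0x1 entries below). A sketch under the same path assumptions:

# 97952 IOPS x 4 KiB = 97952/256 MiB/s ≈ 382.6 MiB/s, matching the sample below
./build/examples/bdevperf --json ./test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1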
00:13:35.980 97952.00 IOPS, 382.62 MiB/s 00:13:35.980 Latency(us) 00:13:35.980 [2024-11-17T23:18:59.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.980 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.980 nvme0n1 : 1.03 15712.16 61.38 0.00 0.00 8137.29 5293.29 28432.54 00:13:35.980 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.981 nvme1n1 : 1.03 17653.47 68.96 0.00 0.00 7234.80 3327.21 15627.82 00:13:35.981 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.981 nvme2n1 : 1.03 15619.25 61.01 0.00 0.00 8151.50 5520.15 28230.89 00:13:35.981 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.981 nvme2n2 : 1.03 15601.68 60.94 0.00 0.00 8123.21 5318.50 28230.89 00:13:35.981 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.981 nvme2n3 : 1.03 15583.57 60.87 0.00 0.00 8124.28 5242.88 28230.89 00:13:35.981 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:35.981 nvme3n1 : 1.04 15565.49 60.80 0.00 0.00 8125.19 5167.26 28230.89 00:13:35.981 [2024-11-17T23:18:59.802Z] =================================================================================================================== 00:13:35.981 [2024-11-17T23:18:59.802Z] Total : 95735.62 373.97 0.00 0.00 7966.97 3327.21 28432.54 00:13:35.981 00:13:35.981 real 0m1.773s 00:13:35.981 user 0m1.093s 00:13:35.981 sys 0m0.496s 00:13:35.981 23:18:59 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.981 23:18:59 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:35.981 ************************************ 00:13:35.981 END TEST bdev_write_zeroes 00:13:35.981 ************************************ 00:13:35.981 23:18:59 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:35.981 23:18:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:35.981 23:18:59 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.981 23:18:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:35.981 ************************************ 00:13:35.981 START TEST bdev_json_nonenclosed 00:13:35.981 ************************************ 00:13:35.981 23:18:59 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.239 [2024-11-17 23:18:59.858246] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:13:36.239 [2024-11-17 23:18:59.858359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81735 ] 00:13:36.239 [2024-11-17 23:19:00.002445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.239 [2024-11-17 23:19:00.027593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.240 [2024-11-17 23:19:00.027699] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:36.240 [2024-11-17 23:19:00.027718] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:36.240 [2024-11-17 23:19:00.027729] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:36.498 00:13:36.498 real 0m0.300s 00:13:36.498 user 0m0.121s 00:13:36.498 sys 0m0.075s 00:13:36.498 23:19:00 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.498 23:19:00 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:36.498 ************************************ 00:13:36.498 END TEST bdev_json_nonenclosed 00:13:36.498 ************************************ 00:13:36.498 23:19:00 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.498 23:19:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:36.498 23:19:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.498 23:19:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:36.498 ************************************ 00:13:36.498 START TEST bdev_json_nonarray 00:13:36.498 ************************************ 00:13:36.498 23:19:00 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.498 [2024-11-17 23:19:00.201701] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:13:36.498 [2024-11-17 23:19:00.201992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81755 ] 00:13:36.759 [2024-11-17 23:19:00.349969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.759 [2024-11-17 23:19:00.373819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.759 [2024-11-17 23:19:00.373938] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:13:36.759 [2024-11-17 23:19:00.373954] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:36.759 [2024-11-17 23:19:00.373967] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:36.759 00:13:36.759 real 0m0.298s 00:13:36.759 user 0m0.113s 00:13:36.759 sys 0m0.083s 00:13:36.759 23:19:00 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.759 ************************************ 00:13:36.759 END TEST bdev_json_nonarray 00:13:36.759 ************************************ 00:13:36.759 23:19:00 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:36.759 23:19:00 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:36.759 23:19:00 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:36.759 23:19:00 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:36.759 23:19:00 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:36.759 23:19:00 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:36.759 23:19:00 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:36.759 23:19:00 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:36.759 23:19:00 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:36.759 23:19:00 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:36.759 23:19:00 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:36.759 23:19:00 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:36.759 23:19:00 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:37.328 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:47.369 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:47.369 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.276 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.276 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:49.536 00:13:49.536 real 0m58.949s 00:13:49.536 user 1m18.574s 00:13:49.536 sys 0m45.467s 00:13:49.536 23:19:13 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.536 ************************************ 00:13:49.536 END TEST blockdev_xnvme 00:13:49.536 ************************************ 00:13:49.536 23:19:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:49.536 23:19:13 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:49.536 23:19:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:49.536 23:19:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.536 23:19:13 -- common/autotest_common.sh@10 -- # set +x 00:13:49.536 ************************************ 00:13:49.536 START TEST ublk 00:13:49.536 ************************************ 00:13:49.536 23:19:13 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:49.536 * Looking for test storage... 
00:13:49.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:49.536 23:19:13 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:49.536 23:19:13 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:49.536 23:19:13 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:13:49.536 23:19:13 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:49.536 23:19:13 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:49.536 23:19:13 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:49.536 23:19:13 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:49.536 23:19:13 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:49.536 23:19:13 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:49.536 23:19:13 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:49.536 23:19:13 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:49.536 23:19:13 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:49.536 23:19:13 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:49.536 23:19:13 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:49.536 23:19:13 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:49.536 23:19:13 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:49.536 23:19:13 ublk -- scripts/common.sh@345 -- # : 1 00:13:49.536 23:19:13 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:49.536 23:19:13 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:49.536 23:19:13 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:49.536 23:19:13 ublk -- scripts/common.sh@353 -- # local d=1 00:13:49.536 23:19:13 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:49.536 23:19:13 ublk -- scripts/common.sh@355 -- # echo 1 00:13:49.536 23:19:13 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:49.536 23:19:13 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:49.536 23:19:13 ublk -- scripts/common.sh@353 -- # local d=2 00:13:49.536 23:19:13 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:49.536 23:19:13 ublk -- scripts/common.sh@355 -- # echo 2 00:13:49.536 23:19:13 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:49.536 23:19:13 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:49.536 23:19:13 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:49.536 23:19:13 ublk -- scripts/common.sh@368 -- # return 0 00:13:49.536 23:19:13 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:49.536 23:19:13 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:49.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.536 --rc genhtml_branch_coverage=1 00:13:49.536 --rc genhtml_function_coverage=1 00:13:49.536 --rc genhtml_legend=1 00:13:49.536 --rc geninfo_all_blocks=1 00:13:49.536 --rc geninfo_unexecuted_blocks=1 00:13:49.536 00:13:49.536 ' 00:13:49.536 23:19:13 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:49.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.536 --rc genhtml_branch_coverage=1 00:13:49.536 --rc genhtml_function_coverage=1 00:13:49.536 --rc genhtml_legend=1 00:13:49.536 --rc geninfo_all_blocks=1 00:13:49.536 --rc geninfo_unexecuted_blocks=1 00:13:49.536 00:13:49.536 ' 00:13:49.536 23:19:13 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:49.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.536 --rc genhtml_branch_coverage=1 00:13:49.536 --rc 
genhtml_function_coverage=1 00:13:49.536 --rc genhtml_legend=1 00:13:49.536 --rc geninfo_all_blocks=1 00:13:49.536 --rc geninfo_unexecuted_blocks=1 00:13:49.536 00:13:49.536 ' 00:13:49.536 23:19:13 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:49.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.536 --rc genhtml_branch_coverage=1 00:13:49.536 --rc genhtml_function_coverage=1 00:13:49.536 --rc genhtml_legend=1 00:13:49.536 --rc geninfo_all_blocks=1 00:13:49.536 --rc geninfo_unexecuted_blocks=1 00:13:49.536 00:13:49.536 ' 00:13:49.536 23:19:13 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:49.536 23:19:13 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:49.536 23:19:13 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:49.536 23:19:13 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:49.536 23:19:13 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:49.536 23:19:13 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:49.537 23:19:13 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:49.537 23:19:13 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:49.537 23:19:13 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:49.537 23:19:13 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:49.537 23:19:13 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:49.537 23:19:13 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:49.537 23:19:13 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:49.537 23:19:13 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:49.537 23:19:13 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:49.537 23:19:13 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:49.537 23:19:13 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:49.537 23:19:13 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:49.537 23:19:13 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:49.537 23:19:13 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:49.537 23:19:13 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:49.537 23:19:13 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.537 23:19:13 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:49.537 ************************************ 00:13:49.537 START TEST test_save_ublk_config 00:13:49.537 ************************************ 00:13:49.537 23:19:13 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:13:49.537 23:19:13 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:49.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
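test_save_ublk_config checks that a ublk setup survives a save_config round-trip: a first target creates a ublk device by hand, its live configuration is captured as JSON, and a second target is then booted from that JSON (fed in over a /dev/fd process substitution) and checked for the same /dev/ublkb0. Each disk start walks the ADD_DEV -> SET_PARAMS -> START_DEV control-command sequence visible in the DEBUG lines. A minimal sketch of the round-trip using SPDK's standard rpc.py client (the test itself goes through the rpc_cmd wrapper; the malloc sizing here is inferred from the 8192-block, 4096 B bdev in the saved config below, and -q/-d are the num_queues/queue_depth options seen later in the trace):

./build/bin/spdk_tgt -L ublk &
tgtpid=$!
./scripts/rpc.py ublk_create_target
./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096    # 32 MiB malloc bdev named malloc0
./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128    # expose it as /dev/ublkb0
config=$(./scripts/rpc.py save_config)                    # capture the live configuration as JSON
kill "$tgtpid"; wait "$tgtpid"
./build/bin/spdk_tgt -L ublk -c <(echo "$config") &       # reboot the target from the saved config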
00:13:49.537 23:19:13 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=82051 00:13:49.537 23:19:13 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:49.537 23:19:13 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:49.537 23:19:13 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 82051 00:13:49.537 23:19:13 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 82051 ']' 00:13:49.537 23:19:13 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.537 23:19:13 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.537 23:19:13 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.537 23:19:13 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.537 23:19:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:49.796 [2024-11-17 23:19:13.388610] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:13:49.796 [2024-11-17 23:19:13.388859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82051 ] 00:13:49.796 [2024-11-17 23:19:13.536410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.796 [2024-11-17 23:19:13.560709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.731 23:19:14 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.731 23:19:14 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:13:50.731 23:19:14 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:50.731 23:19:14 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:50.731 23:19:14 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.731 23:19:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:50.731 [2024-11-17 23:19:14.224901] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:50.731 [2024-11-17 23:19:14.225585] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:50.731 malloc0 00:13:50.731 [2024-11-17 23:19:14.257001] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:50.731 [2024-11-17 23:19:14.257068] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:50.731 [2024-11-17 23:19:14.257079] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:50.731 [2024-11-17 23:19:14.257094] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:50.731 [2024-11-17 23:19:14.265979] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:50.731 [2024-11-17 23:19:14.266006] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:50.731 [2024-11-17 23:19:14.272903] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:50.731 [2024-11-17 23:19:14.273003] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_START_DEV 00:13:50.731 [2024-11-17 23:19:14.289899] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:50.731 0 00:13:50.731 23:19:14 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.731 23:19:14 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:50.732 23:19:14 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.732 23:19:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:50.990 23:19:14 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.990 23:19:14 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:50.990 "subsystems": [ 00:13:50.990 { 00:13:50.990 "subsystem": "fsdev", 00:13:50.990 "config": [ 00:13:50.990 { 00:13:50.990 "method": "fsdev_set_opts", 00:13:50.990 "params": { 00:13:50.990 "fsdev_io_pool_size": 65535, 00:13:50.990 "fsdev_io_cache_size": 256 00:13:50.990 } 00:13:50.990 } 00:13:50.990 ] 00:13:50.990 }, 00:13:50.990 { 00:13:50.990 "subsystem": "keyring", 00:13:50.990 "config": [] 00:13:50.990 }, 00:13:50.990 { 00:13:50.990 "subsystem": "iobuf", 00:13:50.990 "config": [ 00:13:50.990 { 00:13:50.990 "method": "iobuf_set_options", 00:13:50.990 "params": { 00:13:50.990 "small_pool_count": 8192, 00:13:50.990 "large_pool_count": 1024, 00:13:50.990 "small_bufsize": 8192, 00:13:50.990 "large_bufsize": 135168, 00:13:50.990 "enable_numa": false 00:13:50.990 } 00:13:50.990 } 00:13:50.990 ] 00:13:50.990 }, 00:13:50.990 { 00:13:50.990 "subsystem": "sock", 00:13:50.990 "config": [ 00:13:50.990 { 00:13:50.990 "method": "sock_set_default_impl", 00:13:50.990 "params": { 00:13:50.990 "impl_name": "posix" 00:13:50.990 } 00:13:50.990 }, 00:13:50.990 { 00:13:50.990 "method": "sock_impl_set_options", 00:13:50.990 "params": { 00:13:50.990 "impl_name": "ssl", 00:13:50.990 "recv_buf_size": 4096, 00:13:50.990 "send_buf_size": 4096, 00:13:50.990 "enable_recv_pipe": true, 00:13:50.990 "enable_quickack": false, 00:13:50.990 "enable_placement_id": 0, 00:13:50.990 "enable_zerocopy_send_server": true, 00:13:50.990 "enable_zerocopy_send_client": false, 00:13:50.990 "zerocopy_threshold": 0, 00:13:50.990 "tls_version": 0, 00:13:50.990 "enable_ktls": false 00:13:50.990 } 00:13:50.990 }, 00:13:50.990 { 00:13:50.990 "method": "sock_impl_set_options", 00:13:50.990 "params": { 00:13:50.990 "impl_name": "posix", 00:13:50.990 "recv_buf_size": 2097152, 00:13:50.990 "send_buf_size": 2097152, 00:13:50.990 "enable_recv_pipe": true, 00:13:50.990 "enable_quickack": false, 00:13:50.990 "enable_placement_id": 0, 00:13:50.990 "enable_zerocopy_send_server": true, 00:13:50.990 "enable_zerocopy_send_client": false, 00:13:50.990 "zerocopy_threshold": 0, 00:13:50.990 "tls_version": 0, 00:13:50.990 "enable_ktls": false 00:13:50.990 } 00:13:50.990 } 00:13:50.990 ] 00:13:50.990 }, 00:13:50.990 { 00:13:50.990 "subsystem": "vmd", 00:13:50.990 "config": [] 00:13:50.990 }, 00:13:50.990 { 00:13:50.990 "subsystem": "accel", 00:13:50.990 "config": [ 00:13:50.990 { 00:13:50.990 "method": "accel_set_options", 00:13:50.990 "params": { 00:13:50.990 "small_cache_size": 128, 00:13:50.990 "large_cache_size": 16, 00:13:50.990 "task_count": 2048, 00:13:50.990 "sequence_count": 2048, 00:13:50.990 "buf_count": 2048 00:13:50.990 } 00:13:50.990 } 00:13:50.990 ] 00:13:50.990 }, 00:13:50.990 { 00:13:50.990 "subsystem": "bdev", 00:13:50.990 "config": [ 00:13:50.990 { 00:13:50.990 "method": "bdev_set_options", 00:13:50.990 
"params": { 00:13:50.991 "bdev_io_pool_size": 65535, 00:13:50.991 "bdev_io_cache_size": 256, 00:13:50.991 "bdev_auto_examine": true, 00:13:50.991 "iobuf_small_cache_size": 128, 00:13:50.991 "iobuf_large_cache_size": 16 00:13:50.991 } 00:13:50.991 }, 00:13:50.991 { 00:13:50.991 "method": "bdev_raid_set_options", 00:13:50.991 "params": { 00:13:50.991 "process_window_size_kb": 1024, 00:13:50.991 "process_max_bandwidth_mb_sec": 0 00:13:50.991 } 00:13:50.991 }, 00:13:50.991 { 00:13:50.991 "method": "bdev_iscsi_set_options", 00:13:50.991 "params": { 00:13:50.991 "timeout_sec": 30 00:13:50.991 } 00:13:50.991 }, 00:13:50.991 { 00:13:50.991 "method": "bdev_nvme_set_options", 00:13:50.991 "params": { 00:13:50.991 "action_on_timeout": "none", 00:13:50.991 "timeout_us": 0, 00:13:50.991 "timeout_admin_us": 0, 00:13:50.991 "keep_alive_timeout_ms": 10000, 00:13:50.991 "arbitration_burst": 0, 00:13:50.991 "low_priority_weight": 0, 00:13:50.991 "medium_priority_weight": 0, 00:13:50.991 "high_priority_weight": 0, 00:13:50.991 "nvme_adminq_poll_period_us": 10000, 00:13:50.991 "nvme_ioq_poll_period_us": 0, 00:13:50.991 "io_queue_requests": 0, 00:13:50.991 "delay_cmd_submit": true, 00:13:50.991 "transport_retry_count": 4, 00:13:50.991 "bdev_retry_count": 3, 00:13:50.991 "transport_ack_timeout": 0, 00:13:50.991 "ctrlr_loss_timeout_sec": 0, 00:13:50.991 "reconnect_delay_sec": 0, 00:13:50.991 "fast_io_fail_timeout_sec": 0, 00:13:50.991 "disable_auto_failback": false, 00:13:50.991 "generate_uuids": false, 00:13:50.991 "transport_tos": 0, 00:13:50.991 "nvme_error_stat": false, 00:13:50.991 "rdma_srq_size": 0, 00:13:50.991 "io_path_stat": false, 00:13:50.991 "allow_accel_sequence": false, 00:13:50.991 "rdma_max_cq_size": 0, 00:13:50.991 "rdma_cm_event_timeout_ms": 0, 00:13:50.991 "dhchap_digests": [ 00:13:50.991 "sha256", 00:13:50.991 "sha384", 00:13:50.991 "sha512" 00:13:50.991 ], 00:13:50.991 "dhchap_dhgroups": [ 00:13:50.991 "null", 00:13:50.991 "ffdhe2048", 00:13:50.991 "ffdhe3072", 00:13:50.991 "ffdhe4096", 00:13:50.991 "ffdhe6144", 00:13:50.991 "ffdhe8192" 00:13:50.991 ] 00:13:50.991 } 00:13:50.991 }, 00:13:50.991 { 00:13:50.991 "method": "bdev_nvme_set_hotplug", 00:13:50.991 "params": { 00:13:50.991 "period_us": 100000, 00:13:50.991 "enable": false 00:13:50.991 } 00:13:50.991 }, 00:13:50.991 { 00:13:50.991 "method": "bdev_malloc_create", 00:13:50.991 "params": { 00:13:50.991 "name": "malloc0", 00:13:50.991 "num_blocks": 8192, 00:13:50.991 "block_size": 4096, 00:13:50.991 "physical_block_size": 4096, 00:13:50.991 "uuid": "e5a41ced-b451-4035-9871-74e8e4fbe006", 00:13:50.991 "optimal_io_boundary": 0, 00:13:50.991 "md_size": 0, 00:13:50.991 "dif_type": 0, 00:13:50.991 "dif_is_head_of_md": false, 00:13:50.991 "dif_pi_format": 0 00:13:50.991 } 00:13:50.991 }, 00:13:50.991 { 00:13:50.991 "method": "bdev_wait_for_examine" 00:13:50.991 } 00:13:50.991 ] 00:13:50.991 }, 00:13:50.991 { 00:13:50.991 "subsystem": "scsi", 00:13:50.991 "config": null 00:13:50.991 }, 00:13:50.991 { 00:13:50.991 "subsystem": "scheduler", 00:13:50.991 "config": [ 00:13:50.991 { 00:13:50.991 "method": "framework_set_scheduler", 00:13:50.991 "params": { 00:13:50.991 "name": "static" 00:13:50.991 } 00:13:50.991 } 00:13:50.991 ] 00:13:50.991 }, 00:13:50.991 { 00:13:50.991 "subsystem": "vhost_scsi", 00:13:50.991 "config": [] 00:13:50.991 }, 00:13:50.991 { 00:13:50.991 "subsystem": "vhost_blk", 00:13:50.991 "config": [] 00:13:50.991 }, 00:13:50.991 { 00:13:50.991 "subsystem": "ublk", 00:13:50.991 "config": [ 00:13:50.991 { 00:13:50.991 "method": 
"ublk_create_target", 00:13:50.991 "params": { 00:13:50.991 "cpumask": "1" 00:13:50.991 } 00:13:50.991 }, 00:13:50.991 { 00:13:50.991 "method": "ublk_start_disk", 00:13:50.991 "params": { 00:13:50.991 "bdev_name": "malloc0", 00:13:50.991 "ublk_id": 0, 00:13:50.991 "num_queues": 1, 00:13:50.991 "queue_depth": 128 00:13:50.991 } 00:13:50.991 } 00:13:50.991 ] 00:13:50.991 }, 00:13:50.991 { 00:13:50.991 "subsystem": "nbd", 00:13:50.991 "config": [] 00:13:50.991 }, 00:13:50.991 { 00:13:50.991 "subsystem": "nvmf", 00:13:50.991 "config": [ 00:13:50.991 { 00:13:50.991 "method": "nvmf_set_config", 00:13:50.991 "params": { 00:13:50.991 "discovery_filter": "match_any", 00:13:50.991 "admin_cmd_passthru": { 00:13:50.991 "identify_ctrlr": false 00:13:50.991 }, 00:13:50.991 "dhchap_digests": [ 00:13:50.991 "sha256", 00:13:50.991 "sha384", 00:13:50.991 "sha512" 00:13:50.991 ], 00:13:50.991 "dhchap_dhgroups": [ 00:13:50.991 "null", 00:13:50.991 "ffdhe2048", 00:13:50.991 "ffdhe3072", 00:13:50.991 "ffdhe4096", 00:13:50.991 "ffdhe6144", 00:13:50.991 "ffdhe8192" 00:13:50.991 ] 00:13:50.991 } 00:13:50.991 }, 00:13:50.991 { 00:13:50.991 "method": "nvmf_set_max_subsystems", 00:13:50.991 "params": { 00:13:50.991 "max_subsystems": 1024 00:13:50.991 } 00:13:50.991 }, 00:13:50.991 { 00:13:50.991 "method": "nvmf_set_crdt", 00:13:50.991 "params": { 00:13:50.991 "crdt1": 0, 00:13:50.991 "crdt2": 0, 00:13:50.991 "crdt3": 0 00:13:50.991 } 00:13:50.991 } 00:13:50.991 ] 00:13:50.991 }, 00:13:50.991 { 00:13:50.991 "subsystem": "iscsi", 00:13:50.991 "config": [ 00:13:50.991 { 00:13:50.991 "method": "iscsi_set_options", 00:13:50.991 "params": { 00:13:50.991 "node_base": "iqn.2016-06.io.spdk", 00:13:50.991 "max_sessions": 128, 00:13:50.991 "max_connections_per_session": 2, 00:13:50.991 "max_queue_depth": 64, 00:13:50.991 "default_time2wait": 2, 00:13:50.991 "default_time2retain": 20, 00:13:50.991 "first_burst_length": 8192, 00:13:50.991 "immediate_data": true, 00:13:50.991 "allow_duplicated_isid": false, 00:13:50.991 "error_recovery_level": 0, 00:13:50.991 "nop_timeout": 60, 00:13:50.991 "nop_in_interval": 30, 00:13:50.991 "disable_chap": false, 00:13:50.991 "require_chap": false, 00:13:50.991 "mutual_chap": false, 00:13:50.991 "chap_group": 0, 00:13:50.991 "max_large_datain_per_connection": 64, 00:13:50.991 "max_r2t_per_connection": 4, 00:13:50.991 "pdu_pool_size": 36864, 00:13:50.991 "immediate_data_pool_size": 16384, 00:13:50.991 "data_out_pool_size": 2048 00:13:50.991 } 00:13:50.991 } 00:13:50.991 ] 00:13:50.991 } 00:13:50.991 ] 00:13:50.991 }' 00:13:50.991 23:19:14 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 82051 00:13:50.991 23:19:14 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 82051 ']' 00:13:50.991 23:19:14 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 82051 00:13:50.991 23:19:14 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:13:50.991 23:19:14 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:50.991 23:19:14 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82051 00:13:50.991 killing process with pid 82051 00:13:50.991 23:19:14 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:50.991 23:19:14 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:50.991 23:19:14 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 82051' 00:13:50.991 23:19:14 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 82051 00:13:50.991 23:19:14 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 82051 00:13:51.250 [2024-11-17 23:19:14.840361] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:51.250 [2024-11-17 23:19:14.870992] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:51.250 [2024-11-17 23:19:14.871115] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:51.250 [2024-11-17 23:19:14.877909] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:51.250 [2024-11-17 23:19:14.877964] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:51.250 [2024-11-17 23:19:14.877972] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:51.250 [2024-11-17 23:19:14.878006] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:51.250 [2024-11-17 23:19:14.878144] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:51.514 23:19:15 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=82088 00:13:51.514 23:19:15 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 82088 00:13:51.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.514 23:19:15 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 82088 ']' 00:13:51.514 23:19:15 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.514 23:19:15 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.514 23:19:15 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:51.514 23:19:15 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.514 23:19:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:51.514 23:19:15 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:51.514 "subsystems": [ 00:13:51.514 { 00:13:51.514 "subsystem": "fsdev", 00:13:51.514 "config": [ 00:13:51.514 { 00:13:51.514 "method": "fsdev_set_opts", 00:13:51.514 "params": { 00:13:51.514 "fsdev_io_pool_size": 65535, 00:13:51.514 "fsdev_io_cache_size": 256 00:13:51.514 } 00:13:51.514 } 00:13:51.514 ] 00:13:51.514 }, 00:13:51.514 { 00:13:51.514 "subsystem": "keyring", 00:13:51.514 "config": [] 00:13:51.514 }, 00:13:51.514 { 00:13:51.514 "subsystem": "iobuf", 00:13:51.514 "config": [ 00:13:51.514 { 00:13:51.514 "method": "iobuf_set_options", 00:13:51.514 "params": { 00:13:51.514 "small_pool_count": 8192, 00:13:51.514 "large_pool_count": 1024, 00:13:51.514 "small_bufsize": 8192, 00:13:51.514 "large_bufsize": 135168, 00:13:51.514 "enable_numa": false 00:13:51.514 } 00:13:51.514 } 00:13:51.514 ] 00:13:51.514 }, 00:13:51.514 { 00:13:51.514 "subsystem": "sock", 00:13:51.514 "config": [ 00:13:51.514 { 00:13:51.514 "method": "sock_set_default_impl", 00:13:51.514 "params": { 00:13:51.514 "impl_name": "posix" 00:13:51.514 } 00:13:51.514 }, 00:13:51.514 { 00:13:51.514 "method": "sock_impl_set_options", 00:13:51.514 "params": { 00:13:51.514 "impl_name": "ssl", 00:13:51.514 "recv_buf_size": 4096, 00:13:51.514 "send_buf_size": 4096, 00:13:51.514 "enable_recv_pipe": true, 00:13:51.514 "enable_quickack": false, 00:13:51.514 "enable_placement_id": 0, 00:13:51.514 "enable_zerocopy_send_server": true, 00:13:51.514 "enable_zerocopy_send_client": false, 00:13:51.514 "zerocopy_threshold": 0, 00:13:51.514 "tls_version": 0, 00:13:51.514 "enable_ktls": false 00:13:51.515 } 00:13:51.515 }, 00:13:51.515 { 00:13:51.515 "method": "sock_impl_set_options", 00:13:51.515 "params": { 00:13:51.515 "impl_name": "posix", 00:13:51.515 "recv_buf_size": 2097152, 00:13:51.515 "send_buf_size": 2097152, 00:13:51.515 "enable_recv_pipe": true, 00:13:51.515 "enable_quickack": false, 00:13:51.515 "enable_placement_id": 0, 00:13:51.515 "enable_zerocopy_send_server": true, 00:13:51.515 "enable_zerocopy_send_client": false, 00:13:51.515 "zerocopy_threshold": 0, 00:13:51.515 "tls_version": 0, 00:13:51.515 "enable_ktls": false 00:13:51.515 } 00:13:51.515 } 00:13:51.515 ] 00:13:51.515 }, 00:13:51.515 { 00:13:51.515 "subsystem": "vmd", 00:13:51.515 "config": [] 00:13:51.515 }, 00:13:51.515 { 00:13:51.515 "subsystem": "accel", 00:13:51.515 "config": [ 00:13:51.515 { 00:13:51.515 "method": "accel_set_options", 00:13:51.515 "params": { 00:13:51.515 "small_cache_size": 128, 00:13:51.515 "large_cache_size": 16, 00:13:51.515 "task_count": 2048, 00:13:51.515 "sequence_count": 2048, 00:13:51.515 "buf_count": 2048 00:13:51.515 } 00:13:51.515 } 00:13:51.515 ] 00:13:51.515 }, 00:13:51.515 { 00:13:51.515 "subsystem": "bdev", 00:13:51.515 "config": [ 00:13:51.515 { 00:13:51.515 "method": "bdev_set_options", 00:13:51.515 "params": { 00:13:51.515 "bdev_io_pool_size": 65535, 00:13:51.515 "bdev_io_cache_size": 256, 00:13:51.515 "bdev_auto_examine": true, 00:13:51.515 "iobuf_small_cache_size": 128, 00:13:51.515 "iobuf_large_cache_size": 16 00:13:51.515 } 00:13:51.515 }, 00:13:51.515 { 00:13:51.515 "method": "bdev_raid_set_options", 00:13:51.515 "params": { 00:13:51.515 "process_window_size_kb": 1024, 00:13:51.515 "process_max_bandwidth_mb_sec": 0 00:13:51.515 } 00:13:51.515 }, 
00:13:51.515 { 00:13:51.515 "method": "bdev_iscsi_set_options", 00:13:51.515 "params": { 00:13:51.515 "timeout_sec": 30 00:13:51.515 } 00:13:51.515 }, 00:13:51.515 { 00:13:51.515 "method": "bdev_nvme_set_options", 00:13:51.515 "params": { 00:13:51.515 "action_on_timeout": "none", 00:13:51.515 "timeout_us": 0, 00:13:51.515 "timeout_admin_us": 0, 00:13:51.515 "keep_alive_timeout_ms": 10000, 00:13:51.515 "arbitration_burst": 0, 00:13:51.515 "low_priority_weight": 0, 00:13:51.515 "medium_priority_weight": 0, 00:13:51.515 "high_priority_weight": 0, 00:13:51.515 "nvme_adminq_poll_period_us": 10000, 00:13:51.515 "nvme_ioq_poll_period_us": 0, 00:13:51.515 "io_queue_requests": 0, 00:13:51.515 "delay_cmd_submit": true, 00:13:51.515 "transport_retry_count": 4, 00:13:51.515 "bdev_retry_count": 3, 00:13:51.515 "transport_ack_timeout": 0, 00:13:51.515 "ctrlr_loss_timeout_sec": 0, 00:13:51.515 "reconnect_delay_sec": 0, 00:13:51.515 "fast_io_fail_timeout_sec": 0, 00:13:51.515 "disable_auto_failback": false, 00:13:51.515 "generate_uuids": false, 00:13:51.515 "transport_tos": 0, 00:13:51.515 "nvme_error_stat": false, 00:13:51.515 "rdma_srq_size": 0, 00:13:51.515 "io_path_stat": false, 00:13:51.515 "allow_accel_sequence": false, 00:13:51.515 "rdma_max_cq_size": 0, 00:13:51.515 "rdma_cm_event_timeout_ms": 0, 00:13:51.515 "dhchap_digests": [ 00:13:51.515 "sha256", 00:13:51.515 "sha384", 00:13:51.515 "sha512" 00:13:51.515 ], 00:13:51.515 "dhchap_dhgroups": [ 00:13:51.515 "null", 00:13:51.515 "ffdhe2048", 00:13:51.515 "ffdhe3072", 00:13:51.515 "ffdhe4096", 00:13:51.515 "ffdhe6144", 00:13:51.516 "ffdhe8192" 00:13:51.516 ] 00:13:51.516 } 00:13:51.516 }, 00:13:51.516 { 00:13:51.516 "method": "bdev_nvme_set_hotplug", 00:13:51.516 "params": { 00:13:51.516 "period_us": 100000, 00:13:51.516 "enable": false 00:13:51.516 } 00:13:51.516 }, 00:13:51.516 { 00:13:51.516 "method": "bdev_malloc_create", 00:13:51.516 "params": { 00:13:51.516 "name": "malloc0", 00:13:51.516 "num_blocks": 8192, 00:13:51.516 "block_size": 4096, 00:13:51.516 "physical_block_size": 4096, 00:13:51.516 "uuid": "e5a41ced-b451-4035-9871-74e8e4fbe006", 00:13:51.516 "optimal_io_boundary": 0, 00:13:51.516 "md_size": 0, 00:13:51.516 "dif_type": 0, 00:13:51.516 "dif_is_head_of_md": false, 00:13:51.516 "dif_pi_format": 0 00:13:51.516 } 00:13:51.516 }, 00:13:51.516 { 00:13:51.516 "method": "bdev_wait_for_examine" 00:13:51.516 } 00:13:51.516 ] 00:13:51.516 }, 00:13:51.516 { 00:13:51.516 "subsystem": "scsi", 00:13:51.516 "config": null 00:13:51.516 }, 00:13:51.516 { 00:13:51.516 "subsystem": "scheduler", 00:13:51.516 "config": [ 00:13:51.516 { 00:13:51.516 "method": "framework_set_scheduler", 00:13:51.516 "params": { 00:13:51.516 "name": "static" 00:13:51.516 } 00:13:51.516 } 00:13:51.516 ] 00:13:51.516 }, 00:13:51.516 { 00:13:51.516 "subsystem": "vhost_scsi", 00:13:51.516 "config": [] 00:13:51.516 }, 00:13:51.516 { 00:13:51.516 "subsystem": "vhost_blk", 00:13:51.516 "config": [] 00:13:51.516 }, 00:13:51.516 { 00:13:51.516 "subsystem": "ublk", 00:13:51.516 "config": [ 00:13:51.516 { 00:13:51.516 "method": "ublk_create_target", 00:13:51.516 "params": { 00:13:51.516 "cpumask": "1" 00:13:51.516 } 00:13:51.516 }, 00:13:51.516 { 00:13:51.516 "method": "ublk_start_disk", 00:13:51.516 "params": { 00:13:51.516 "bdev_name": "malloc0", 00:13:51.516 "ublk_id": 0, 00:13:51.516 "num_queues": 1, 00:13:51.516 "queue_depth": 128 00:13:51.516 } 00:13:51.516 } 00:13:51.516 ] 00:13:51.516 }, 00:13:51.516 { 00:13:51.516 "subsystem": "nbd", 00:13:51.516 "config": [] 00:13:51.516 }, 
00:13:51.516 { 00:13:51.516 "subsystem": "nvmf", 00:13:51.516 "config": [ 00:13:51.516 { 00:13:51.516 "method": "nvmf_set_config", 00:13:51.516 "params": { 00:13:51.516 "discovery_filter": "match_any", 00:13:51.516 "admin_cmd_passthru": { 00:13:51.516 "identify_ctrlr": false 00:13:51.516 }, 00:13:51.516 "dhchap_digests": [ 00:13:51.516 "sha256", 00:13:51.516 "sha384", 00:13:51.516 "sha512" 00:13:51.516 ], 00:13:51.516 "dhchap_dhgroups": [ 00:13:51.516 "null", 00:13:51.516 "ffdhe2048", 00:13:51.516 "ffdhe3072", 00:13:51.516 "ffdhe4096", 00:13:51.516 "ffdhe6144", 00:13:51.516 "ffdhe8192" 00:13:51.516 ] 00:13:51.516 } 00:13:51.516 }, 00:13:51.516 { 00:13:51.516 "method": "nvmf_set_max_subsystems", 00:13:51.516 "params": { 00:13:51.516 "max_subsystems": 1024 00:13:51.516 } 00:13:51.516 }, 00:13:51.516 { 00:13:51.516 "method": "nvmf_set_crdt", 00:13:51.516 "params": { 00:13:51.516 "crdt1": 0, 00:13:51.516 "crdt2": 0, 00:13:51.516 "crdt3": 0 00:13:51.516 } 00:13:51.516 } 00:13:51.516 ] 00:13:51.516 }, 00:13:51.516 { 00:13:51.516 "subsystem": "iscsi", 00:13:51.516 "config": [ 00:13:51.516 { 00:13:51.516 "method": "iscsi_set_options", 00:13:51.516 "params": { 00:13:51.516 "node_base": "iqn.2016-06.io.spdk", 00:13:51.516 "max_sessions": 128, 00:13:51.516 "max_connections_per_session": 2, 00:13:51.516 "max_queue_depth": 64, 00:13:51.516 "default_time2wait": 2, 00:13:51.516 "default_time2retain": 20, 00:13:51.516 "first_burst_length": 8192, 00:13:51.516 "immediate_data": true, 00:13:51.516 "allow_duplicated_isid": false, 00:13:51.516 "error_recovery_level": 0, 00:13:51.516 "nop_timeout": 60, 00:13:51.516 "nop_in_interval": 30, 00:13:51.516 "disable_chap": false, 00:13:51.516 "require_chap": false, 00:13:51.516 "mutual_chap": false, 00:13:51.516 "chap_group": 0, 00:13:51.516 "max_large_datain_per_connection": 64, 00:13:51.516 "max_r2t_per_connection": 4, 00:13:51.516 "pdu_pool_size": 36864, 00:13:51.516 "immediate_data_pool_size": 16384, 00:13:51.516 "data_out_pool_size": 2048 00:13:51.516 } 00:13:51.516 } 00:13:51.516 ] 00:13:51.516 } 00:13:51.516 ] 00:13:51.516 }' 00:13:51.516 23:19:15 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:51.516 [2024-11-17 23:19:15.313496] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:13:51.516 [2024-11-17 23:19:15.313623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82088 ] 00:13:51.774 [2024-11-17 23:19:15.458697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.774 [2024-11-17 23:19:15.482223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.032 [2024-11-17 23:19:15.828898] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:52.033 [2024-11-17 23:19:15.829193] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:52.033 [2024-11-17 23:19:15.837015] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:52.033 [2024-11-17 23:19:15.837079] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:52.033 [2024-11-17 23:19:15.837086] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:52.033 [2024-11-17 23:19:15.837096] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:52.033 [2024-11-17 23:19:15.845982] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:52.033 [2024-11-17 23:19:15.846004] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:52.302 [2024-11-17 23:19:15.852916] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:52.302 [2024-11-17 23:19:15.853009] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:52.302 [2024-11-17 23:19:15.869904] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:52.564 23:19:16 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.564 23:19:16 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:13:52.564 23:19:16 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:52.564 23:19:16 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.564 23:19:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:52.564 23:19:16 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:52.564 23:19:16 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.564 23:19:16 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:52.564 23:19:16 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:52.564 23:19:16 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 82088 00:13:52.564 23:19:16 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 82088 ']' 00:13:52.564 23:19:16 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 82088 00:13:52.564 23:19:16 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:13:52.564 23:19:16 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:52.564 23:19:16 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82088 00:13:52.564 killing process with pid 82088 00:13:52.564 23:19:16 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:52.564 
23:19:16 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:52.564 23:19:16 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82088' 00:13:52.564 23:19:16 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 82088 00:13:52.564 23:19:16 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 82088 00:13:52.823 [2024-11-17 23:19:16.421945] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:52.823 [2024-11-17 23:19:16.458916] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:52.823 [2024-11-17 23:19:16.459046] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:52.823 [2024-11-17 23:19:16.468902] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:52.823 [2024-11-17 23:19:16.468962] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:52.823 [2024-11-17 23:19:16.468979] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:52.823 [2024-11-17 23:19:16.469007] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:52.823 [2024-11-17 23:19:16.469146] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:53.081 23:19:16 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:53.081 ************************************ 00:13:53.081 END TEST test_save_ublk_config 00:13:53.081 ************************************ 00:13:53.081 00:13:53.081 real 0m3.532s 00:13:53.081 user 0m2.498s 00:13:53.081 sys 0m1.646s 00:13:53.081 23:19:16 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:53.081 23:19:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:53.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.081 23:19:16 ublk -- ublk/ublk.sh@139 -- # spdk_pid=82134 00:13:53.081 23:19:16 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:53.081 23:19:16 ublk -- ublk/ublk.sh@141 -- # waitforlisten 82134 00:13:53.081 23:19:16 ublk -- common/autotest_common.sh@835 -- # '[' -z 82134 ']' 00:13:53.081 23:19:16 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.081 23:19:16 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:53.081 23:19:16 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:53.081 23:19:16 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.081 23:19:16 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:53.081 23:19:16 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:53.339 [2024-11-17 23:19:16.949523] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:13:53.339 [2024-11-17 23:19:16.949648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82134 ] 00:13:53.340 [2024-11-17 23:19:17.095274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:53.340 [2024-11-17 23:19:17.120077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.340 [2024-11-17 23:19:17.120159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.280 23:19:17 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:54.280 23:19:17 ublk -- common/autotest_common.sh@868 -- # return 0 00:13:54.280 23:19:17 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:54.280 23:19:17 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:54.280 23:19:17 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.280 23:19:17 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:54.280 ************************************ 00:13:54.280 START TEST test_create_ublk 00:13:54.280 ************************************ 00:13:54.280 23:19:17 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:13:54.280 23:19:17 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:54.280 23:19:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.280 23:19:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:54.280 [2024-11-17 23:19:17.800903] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:54.280 [2024-11-17 23:19:17.802203] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:54.280 23:19:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.280 23:19:17 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:54.280 23:19:17 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:54.280 23:19:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.280 23:19:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:54.280 23:19:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.280 23:19:17 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:54.280 23:19:17 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:54.280 23:19:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.280 23:19:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:54.280 [2024-11-17 23:19:17.873027] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:54.280 [2024-11-17 23:19:17.873443] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:54.280 [2024-11-17 23:19:17.873455] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:54.280 [2024-11-17 23:19:17.873464] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:54.280 [2024-11-17 23:19:17.882132] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:54.280 [2024-11-17 23:19:17.882177] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:54.280 
[2024-11-17 23:19:17.888909] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:54.280 [2024-11-17 23:19:17.889545] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:54.280 [2024-11-17 23:19:17.925915] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:54.280 23:19:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.280 23:19:17 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:54.280 23:19:17 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:54.280 23:19:17 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:54.280 23:19:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.280 23:19:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:54.280 23:19:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.280 23:19:17 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:54.280 { 00:13:54.280 "ublk_device": "/dev/ublkb0", 00:13:54.280 "id": 0, 00:13:54.280 "queue_depth": 512, 00:13:54.280 "num_queues": 4, 00:13:54.280 "bdev_name": "Malloc0" 00:13:54.280 } 00:13:54.280 ]' 00:13:54.280 23:19:17 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:54.280 23:19:17 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:54.280 23:19:17 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:54.280 23:19:18 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:54.280 23:19:18 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:54.280 23:19:18 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:54.280 23:19:18 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:54.280 23:19:18 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:54.280 23:19:18 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:54.538 23:19:18 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:54.538 23:19:18 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:54.538 23:19:18 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:54.538 23:19:18 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:54.538 23:19:18 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:54.538 23:19:18 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:54.538 23:19:18 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:54.538 23:19:18 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:54.538 23:19:18 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:54.538 23:19:18 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:54.538 23:19:18 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:54.538 23:19:18 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
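Before the fio template above is executed, test_create_ublk has already issued the full create sequence over RPC (the expanded fio invocation the template produces appears on the next log line). The same setup and teardown, reproduced by hand as a sketch, using the $RPC shorthand introduced above:

"$RPC" ublk_create_target
"$RPC" bdev_malloc_create 128 4096             # 128 MiB ram bdev, 4 KiB blocks -> "Malloc0"
"$RPC" ublk_start_disk Malloc0 0 -q 4 -d 512   # exposes /dev/ublkb0: 4 queues, queue depth 512
"$RPC" ublk_get_disks -n 0                     # verify device node, id, and queue settings
# ... run fio against /dev/ublkb0 (as below), then tear down:
"$RPC" ublk_stop_disk 0
"$RPC" ublk_destroy_target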
00:13:54.538 23:19:18 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:54.538 fio: verification read phase will never start because write phase uses all of runtime 00:13:54.538 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:54.538 fio-3.35 00:13:54.538 Starting 1 process 00:14:04.515 00:14:04.515 fio_test: (groupid=0, jobs=1): err= 0: pid=82179: Sun Nov 17 23:19:28 2024 00:14:04.515 write: IOPS=18.1k, BW=70.7MiB/s (74.2MB/s)(707MiB/10001msec); 0 zone resets 00:14:04.515 clat (usec): min=31, max=4006, avg=54.39, stdev=93.37 00:14:04.515 lat (usec): min=32, max=4006, avg=54.87, stdev=93.39 00:14:04.515 clat percentiles (usec): 00:14:04.515 | 1.00th=[ 39], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 46], 00:14:04.515 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 49], 60.00th=[ 51], 00:14:04.515 | 70.00th=[ 52], 80.00th=[ 56], 90.00th=[ 62], 95.00th=[ 65], 00:14:04.515 | 99.00th=[ 77], 99.50th=[ 88], 99.90th=[ 1860], 99.95th=[ 2737], 00:14:04.515 | 99.99th=[ 3523] 00:14:04.515 bw ( KiB/s): min=62056, max=75944, per=99.95%, avg=72392.42, stdev=4768.78, samples=19 00:14:04.515 iops : min=15514, max=18986, avg=18098.21, stdev=1192.26, samples=19 00:14:04.515 lat (usec) : 50=58.61%, 100=41.01%, 250=0.19%, 500=0.03%, 750=0.01% 00:14:04.515 lat (usec) : 1000=0.01% 00:14:04.515 lat (msec) : 2=0.05%, 4=0.09%, 10=0.01% 00:14:04.515 cpu : usr=2.12%, sys=12.80%, ctx=181081, majf=0, minf=797 00:14:04.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.515 issued rwts: total=0,181086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.515 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.515 00:14:04.515 Run status group 0 (all jobs): 00:14:04.515 WRITE: bw=70.7MiB/s (74.2MB/s), 70.7MiB/s-70.7MiB/s (74.2MB/s-74.2MB/s), io=707MiB (742MB), run=10001-10001msec 00:14:04.515 00:14:04.515 Disk stats (read/write): 00:14:04.515 ublkb0: ios=0/179288, merge=0/0, ticks=0/8276, in_queue=8277, util=98.86% 00:14:04.515 23:19:28 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:04.515 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.773 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.773 [2024-11-17 23:19:28.337225] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:04.773 [2024-11-17 23:19:28.377435] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:04.773 [2024-11-17 23:19:28.378255] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:04.773 [2024-11-17 23:19:28.389944] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:04.773 [2024-11-17 23:19:28.393129] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:04.773 [2024-11-17 23:19:28.393143] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:04.773 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.773 23:19:28 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:14:04.773 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:14:04.773 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:04.773 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:04.773 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:04.773 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:04.773 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:04.773 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:14:04.773 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.773 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.773 [2024-11-17 23:19:28.404998] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:04.773 request: 00:14:04.773 { 00:14:04.773 "ublk_id": 0, 00:14:04.773 "method": "ublk_stop_disk", 00:14:04.773 "req_id": 1 00:14:04.773 } 00:14:04.773 Got JSON-RPC error response 00:14:04.774 response: 00:14:04.774 { 00:14:04.774 "code": -19, 00:14:04.774 "message": "No such device" 00:14:04.774 } 00:14:04.774 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:04.774 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:14:04.774 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:04.774 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:04.774 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:04.774 23:19:28 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:04.774 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.774 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.774 [2024-11-17 23:19:28.420962] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:04.774 [2024-11-17 23:19:28.422585] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:04.774 [2024-11-17 23:19:28.422615] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:04.774 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.774 23:19:28 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:04.774 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.774 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.774 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.774 23:19:28 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:04.774 23:19:28 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:04.774 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.774 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.774 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.774 23:19:28 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:04.774 23:19:28 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:04.774 23:19:28 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:04.774 23:19:28 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:04.774 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.774 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.774 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.774 23:19:28 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:04.774 23:19:28 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:05.033 ************************************ 00:14:05.033 END TEST test_create_ublk 00:14:05.033 ************************************ 00:14:05.033 23:19:28 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:05.033 00:14:05.033 real 0m10.806s 00:14:05.033 user 0m0.492s 00:14:05.033 sys 0m1.368s 00:14:05.033 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.033 23:19:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.033 23:19:28 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:05.033 23:19:28 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:05.033 23:19:28 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.033 23:19:28 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.033 ************************************ 00:14:05.033 START TEST test_create_multi_ublk 00:14:05.033 ************************************ 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.033 [2024-11-17 23:19:28.652893] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:05.033 [2024-11-17 23:19:28.654037] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.033 [2024-11-17 23:19:28.761038] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:14:05.033 [2024-11-17 23:19:28.761357] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:05.033 [2024-11-17 23:19:28.761372] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:05.033 [2024-11-17 23:19:28.761377] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:05.033 [2024-11-17 23:19:28.772937] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:05.033 [2024-11-17 23:19:28.772958] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:05.033 [2024-11-17 23:19:28.784904] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:05.033 [2024-11-17 23:19:28.785416] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:05.033 [2024-11-17 23:19:28.798975] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.033 23:19:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.292 23:19:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.292 23:19:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:05.292 23:19:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:05.292 23:19:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.292 23:19:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.292 [2024-11-17 23:19:28.895008] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:05.292 [2024-11-17 23:19:28.895325] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:05.292 [2024-11-17 23:19:28.895335] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:05.292 [2024-11-17 23:19:28.895342] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:05.292 [2024-11-17 23:19:28.905898] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:05.292 [2024-11-17 23:19:28.905920] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:05.292 [2024-11-17 23:19:28.918891] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:05.292 [2024-11-17 23:19:28.919434] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:05.292 [2024-11-17 23:19:28.932147] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:05.292 23:19:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.292 23:19:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:05.292 23:19:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.292 
23:19:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:05.292 23:19:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.292 23:19:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.292 23:19:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.292 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:05.292 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:05.292 23:19:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.292 23:19:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.292 [2024-11-17 23:19:29.038997] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:05.292 [2024-11-17 23:19:29.039313] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:05.292 [2024-11-17 23:19:29.039327] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:05.292 [2024-11-17 23:19:29.039332] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:05.292 [2024-11-17 23:19:29.050920] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:05.292 [2024-11-17 23:19:29.050939] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:05.292 [2024-11-17 23:19:29.062909] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:05.292 [2024-11-17 23:19:29.063446] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:05.292 [2024-11-17 23:19:29.066628] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:05.292 23:19:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.292 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:05.292 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.292 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:05.292 23:19:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.292 23:19:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.551 23:19:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.551 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:05.551 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:05.551 23:19:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.551 23:19:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.551 [2024-11-17 23:19:29.166997] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:05.551 [2024-11-17 23:19:29.167310] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:05.551 [2024-11-17 23:19:29.167323] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:05.551 [2024-11-17 23:19:29.167330] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:05.551 
[2024-11-17 23:19:29.178922] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:05.551 [2024-11-17 23:19:29.178943] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:05.551 [2024-11-17 23:19:29.190904] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:05.551 [2024-11-17 23:19:29.191425] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:05.551 [2024-11-17 23:19:29.230911] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:05.551 23:19:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.551 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:05.551 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:05.551 23:19:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.551 23:19:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.551 23:19:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.551 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:05.551 { 00:14:05.551 "ublk_device": "/dev/ublkb0", 00:14:05.551 "id": 0, 00:14:05.551 "queue_depth": 512, 00:14:05.551 "num_queues": 4, 00:14:05.551 "bdev_name": "Malloc0" 00:14:05.551 }, 00:14:05.551 { 00:14:05.551 "ublk_device": "/dev/ublkb1", 00:14:05.551 "id": 1, 00:14:05.551 "queue_depth": 512, 00:14:05.551 "num_queues": 4, 00:14:05.551 "bdev_name": "Malloc1" 00:14:05.551 }, 00:14:05.551 { 00:14:05.551 "ublk_device": "/dev/ublkb2", 00:14:05.551 "id": 2, 00:14:05.551 "queue_depth": 512, 00:14:05.551 "num_queues": 4, 00:14:05.551 "bdev_name": "Malloc2" 00:14:05.551 }, 00:14:05.551 { 00:14:05.551 "ublk_device": "/dev/ublkb3", 00:14:05.551 "id": 3, 00:14:05.551 "queue_depth": 512, 00:14:05.551 "num_queues": 4, 00:14:05.551 "bdev_name": "Malloc3" 00:14:05.551 } 00:14:05.551 ]' 00:14:05.551 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:05.551 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.551 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:05.551 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:05.551 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:05.551 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:05.551 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:05.810 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:05.810 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:05.810 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:05.810 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:05.810 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:05.810 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.810 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:05.810 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:05.810 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:05.810 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:05.810 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:05.810 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:05.810 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:05.810 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:05.810 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:05.810 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:05.810 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.810 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:05.810 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:05.810 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:06.068 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:06.068 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:06.068 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:06.068 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:06.068 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:06.068 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:06.068 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:06.068 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.068 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:06.068 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:06.068 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:06.068 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:06.068 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:06.068 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:06.068 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:06.068 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:06.068 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:06.327 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:06.327 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:06.327 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:06.327 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.327 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:06.327 23:19:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.327 23:19:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.327 [2024-11-17 23:19:29.915010] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:06.327 [2024-11-17 23:19:29.944473] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:06.327 [2024-11-17 23:19:29.945353] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:06.327 [2024-11-17 23:19:29.953906] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:06.327 [2024-11-17 23:19:29.954141] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:06.327 [2024-11-17 23:19:29.954148] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:06.327 23:19:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.327 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.327 23:19:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:06.327 23:19:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.327 23:19:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.327 [2024-11-17 23:19:29.969981] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:06.327 [2024-11-17 23:19:30.002930] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:06.327 [2024-11-17 23:19:30.003614] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:06.327 [2024-11-17 23:19:30.012896] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:06.327 [2024-11-17 23:19:30.013146] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:06.327 [2024-11-17 23:19:30.013155] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:06.327 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.327 23:19:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.327 23:19:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:06.327 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.327 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.327 [2024-11-17 23:19:30.019980] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:06.327 [2024-11-17 23:19:30.058928] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:06.327 [2024-11-17 23:19:30.059547] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:06.327 [2024-11-17 23:19:30.061135] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:06.327 [2024-11-17 23:19:30.061358] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:06.327 [2024-11-17 23:19:30.061364] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:06.327 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.327 23:19:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.327 23:19:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:06.327 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.327 23:19:30 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:14:06.327 [2024-11-17 23:19:30.071981] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:06.327 [2024-11-17 23:19:30.103935] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:06.327 [2024-11-17 23:19:30.104516] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:06.327 [2024-11-17 23:19:30.111923] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:06.327 [2024-11-17 23:19:30.112148] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:06.327 [2024-11-17 23:19:30.112154] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:06.327 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.327 23:19:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:06.585 [2024-11-17 23:19:30.303981] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:06.585 [2024-11-17 23:19:30.305126] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:06.585 [2024-11-17 23:19:30.305161] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:06.585 23:19:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:06.585 23:19:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.585 23:19:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:06.585 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.585 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.585 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.585 23:19:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.585 23:19:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:06.585 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.585 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.844 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.844 23:19:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.844 23:19:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:06.844 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.844 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.844 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.844 23:19:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.844 23:19:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:06.844 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.844 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.844 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.844 23:19:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:06.844 23:19:30 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:06.844 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.844 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.844 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.844 23:19:30 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:06.844 23:19:30 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:06.844 23:19:30 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:07.102 23:19:30 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:07.102 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.102 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.102 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.102 23:19:30 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:07.102 23:19:30 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:07.102 ************************************ 00:14:07.102 END TEST test_create_multi_ublk 00:14:07.102 ************************************ 00:14:07.102 23:19:30 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:07.102 00:14:07.102 real 0m2.083s 00:14:07.102 user 0m0.811s 00:14:07.102 sys 0m0.155s 00:14:07.102 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.102 23:19:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.102 23:19:30 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:07.102 23:19:30 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:07.102 23:19:30 ublk -- ublk/ublk.sh@130 -- # killprocess 82134 00:14:07.102 23:19:30 ublk -- common/autotest_common.sh@954 -- # '[' -z 82134 ']' 00:14:07.102 23:19:30 ublk -- common/autotest_common.sh@958 -- # kill -0 82134 00:14:07.102 23:19:30 ublk -- common/autotest_common.sh@959 -- # uname 00:14:07.102 23:19:30 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.102 23:19:30 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82134 00:14:07.102 killing process with pid 82134 00:14:07.102 23:19:30 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:07.102 23:19:30 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:07.102 23:19:30 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82134' 00:14:07.102 23:19:30 ublk -- common/autotest_common.sh@973 -- # kill 82134 00:14:07.102 23:19:30 ublk -- common/autotest_common.sh@978 -- # wait 82134 00:14:07.368 [2024-11-17 23:19:31.010791] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:07.368 [2024-11-17 23:19:31.010865] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:07.629 00:14:07.629 real 0m18.207s 00:14:07.629 user 0m27.611s 00:14:07.629 sys 0m8.350s 00:14:07.629 23:19:31 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.629 23:19:31 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.629 ************************************ 00:14:07.629 END TEST ublk 00:14:07.629 ************************************ 00:14:07.629 23:19:31 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:07.629 
23:19:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:07.629 23:19:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.629 23:19:31 -- common/autotest_common.sh@10 -- # set +x 00:14:07.629 ************************************ 00:14:07.629 START TEST ublk_recovery 00:14:07.629 ************************************ 00:14:07.629 23:19:31 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:07.629 * Looking for test storage... 00:14:07.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:07.891 23:19:31 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:07.891 23:19:31 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:07.891 23:19:31 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:14:07.891 23:19:31 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.891 23:19:31 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:07.891 23:19:31 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.891 23:19:31 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:07.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.891 --rc genhtml_branch_coverage=1 00:14:07.891 --rc genhtml_function_coverage=1 00:14:07.891 --rc genhtml_legend=1 00:14:07.891 --rc geninfo_all_blocks=1 00:14:07.891 --rc geninfo_unexecuted_blocks=1 00:14:07.891 00:14:07.891 ' 00:14:07.891 23:19:31 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:07.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.891 --rc genhtml_branch_coverage=1 00:14:07.891 --rc genhtml_function_coverage=1 00:14:07.891 --rc genhtml_legend=1 00:14:07.891 --rc geninfo_all_blocks=1 00:14:07.891 --rc geninfo_unexecuted_blocks=1 00:14:07.891 00:14:07.891 ' 00:14:07.891 23:19:31 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:07.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.891 --rc genhtml_branch_coverage=1 00:14:07.891 --rc genhtml_function_coverage=1 00:14:07.891 --rc genhtml_legend=1 00:14:07.891 --rc geninfo_all_blocks=1 00:14:07.891 --rc geninfo_unexecuted_blocks=1 00:14:07.891 00:14:07.891 ' 00:14:07.891 23:19:31 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:07.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.891 --rc genhtml_branch_coverage=1 00:14:07.891 --rc genhtml_function_coverage=1 00:14:07.891 --rc genhtml_legend=1 00:14:07.891 --rc geninfo_all_blocks=1 00:14:07.891 --rc geninfo_unexecuted_blocks=1 00:14:07.891 00:14:07.891 ' 00:14:07.891 23:19:31 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:07.891 23:19:31 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:07.891 23:19:31 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:07.891 23:19:31 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:07.891 23:19:31 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:07.891 23:19:31 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:07.891 23:19:31 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:07.891 23:19:31 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:07.891 23:19:31 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:07.891 23:19:31 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:07.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.891 23:19:31 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=82509 00:14:07.891 23:19:31 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:07.891 23:19:31 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 82509 00:14:07.891 23:19:31 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 82509 ']' 00:14:07.891 23:19:31 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.891 23:19:31 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.891 23:19:31 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.891 23:19:31 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.891 23:19:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.891 23:19:31 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:07.891 [2024-11-17 23:19:31.597220] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:14:07.891 [2024-11-17 23:19:31.597339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82509 ] 00:14:08.153 [2024-11-17 23:19:31.743280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:08.153 [2024-11-17 23:19:31.769223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.153 [2024-11-17 23:19:31.769248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.726 23:19:32 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.726 23:19:32 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:14:08.726 23:19:32 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:08.726 23:19:32 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.726 23:19:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:08.726 [2024-11-17 23:19:32.426903] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:08.726 [2024-11-17 23:19:32.428236] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:08.726 23:19:32 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.726 23:19:32 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:08.726 23:19:32 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.726 23:19:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:08.726 malloc0 00:14:08.726 23:19:32 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.726 23:19:32 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:08.726 23:19:32 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.726 23:19:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:08.726 [2024-11-17 23:19:32.467044] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:14:08.726 [2024-11-17 23:19:32.467149] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:08.726 [2024-11-17 23:19:32.467157] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:08.726 [2024-11-17 23:19:32.467166] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:08.726 [2024-11-17 23:19:32.476029] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:08.726 [2024-11-17 23:19:32.476055] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:08.726 [2024-11-17 23:19:32.482911] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:08.726 [2024-11-17 23:19:32.483064] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:08.726 [2024-11-17 23:19:32.494985] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:08.726 1 00:14:08.726 23:19:32 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.726 23:19:32 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:10.106 23:19:33 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=82537 00:14:10.106 23:19:33 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:10.106 23:19:33 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:10.106 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:10.106 fio-3.35 00:14:10.106 Starting 1 process 00:14:15.397 23:19:38 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 82509 00:14:15.397 23:19:38 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:20.698 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 82509 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:20.698 23:19:43 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=82647 00:14:20.698 23:19:43 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:20.698 23:19:43 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 82647 00:14:20.698 23:19:43 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 82647 ']' 00:14:20.698 23:19:43 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:20.698 23:19:43 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.698 23:19:43 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:20.698 23:19:43 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.698 23:19:43 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:20.698 23:19:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:20.698 [2024-11-17 23:19:43.579359] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
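What follows is the crash-and-recover sequence: fio keeps 128 I/Os in flight against /dev/ublkb1 while the original target (pid 82509) is killed with SIGKILL, a replacement target (pid 82647) is started, and ublk_recover_disk re-binds the kernel device to a fresh malloc0 bdev. In outline, as a sketch under the same shorthands as above; the repeated UBLK_CMD_GET_DEV_INFO polling visible below is driven by the target's ublk layer, not by the script:

taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
    --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
    --time_based --runtime=60 &
kill -9 "$spdk_pid"                        # hard-kill the target mid-I/O
"$SPDK_BIN" -m 0x3 -L ublk &               # start a replacement target
spdk_pid=$!
"$RPC" ublk_create_target
"$RPC" bdev_malloc_create -b malloc0 64 4096
"$RPC" ublk_recover_disk malloc0 1         # re-attach /dev/ublkb1; in-flight fio resumes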
00:14:20.698 [2024-11-17 23:19:43.579620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82647 ] 00:14:20.698 [2024-11-17 23:19:43.716387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:20.698 [2024-11-17 23:19:43.741021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.698 [2024-11-17 23:19:43.741096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.698 23:19:44 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:20.698 23:19:44 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:14:20.698 23:19:44 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:20.698 23:19:44 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.698 23:19:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:20.698 [2024-11-17 23:19:44.370899] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:20.698 [2024-11-17 23:19:44.372185] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:20.698 23:19:44 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.698 23:19:44 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:20.698 23:19:44 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.698 23:19:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:20.698 malloc0 00:14:20.699 23:19:44 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.699 23:19:44 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:20.699 23:19:44 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.699 23:19:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:20.699 [2024-11-17 23:19:44.411010] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:20.699 [2024-11-17 23:19:44.411043] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:20.699 [2024-11-17 23:19:44.411050] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:20.699 [2024-11-17 23:19:44.417903] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:20.699 [2024-11-17 23:19:44.417925] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:20.699 1 00:14:20.699 23:19:44 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.699 23:19:44 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 82537 00:14:21.633 [2024-11-17 23:19:45.417967] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:21.633 [2024-11-17 23:19:45.421908] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:21.633 [2024-11-17 23:19:45.421924] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:23.011 [2024-11-17 23:19:46.422910] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:23.011 [2024-11-17 23:19:46.430907] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:23.011 [2024-11-17 23:19:46.430922] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:14:23.947 [2024-11-17 23:19:47.430958] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:23.947 [2024-11-17 23:19:47.438903] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:23.947 [2024-11-17 23:19:47.438922] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:23.947 [2024-11-17 23:19:47.438931] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:23.947 [2024-11-17 23:19:47.439002] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:45.892 [2024-11-17 23:20:08.796912] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:45.892 [2024-11-17 23:20:08.803534] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:45.892 [2024-11-17 23:20:08.811105] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:45.892 [2024-11-17 23:20:08.811126] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:12.452 00:15:12.452 fio_test: (groupid=0, jobs=1): err= 0: pid=82544: Sun Nov 17 23:20:33 2024 00:15:12.452 read: IOPS=13.9k, BW=54.3MiB/s (57.0MB/s)(3261MiB/60004msec) 00:15:12.452 slat (nsec): min=1291, max=331409, avg=5584.62, stdev=1833.95 00:15:12.452 clat (usec): min=690, max=30318k, avg=4625.91, stdev=269538.07 00:15:12.452 lat (usec): min=696, max=30318k, avg=4631.50, stdev=269538.07 00:15:12.452 clat percentiles (usec): 00:15:12.452 | 1.00th=[ 1778], 5.00th=[ 1942], 10.00th=[ 2008], 20.00th=[ 2040], 00:15:12.452 | 30.00th=[ 2073], 40.00th=[ 2089], 50.00th=[ 2114], 60.00th=[ 2114], 00:15:12.452 | 70.00th=[ 2147], 80.00th=[ 2147], 90.00th=[ 2212], 95.00th=[ 3228], 00:15:12.452 | 99.00th=[ 5276], 99.50th=[ 5669], 99.90th=[ 7177], 99.95th=[ 7832], 00:15:12.452 | 99.99th=[12780] 00:15:12.452 bw ( KiB/s): min=46400, max=122080, per=100.00%, avg=111375.46, stdev=12813.58, samples=59 00:15:12.452 iops : min=11600, max=30520, avg=27843.86, stdev=3203.40, samples=59 00:15:12.452 write: IOPS=13.9k, BW=54.3MiB/s (56.9MB/s)(3257MiB/60004msec); 0 zone resets 00:15:12.452 slat (nsec): min=1420, max=278369, avg=5841.08, stdev=1841.84 00:15:12.452 clat (usec): min=701, max=30318k, avg=4567.95, stdev=261416.22 00:15:12.452 lat (usec): min=707, max=30318k, avg=4573.79, stdev=261416.22 00:15:12.452 clat percentiles (usec): 00:15:12.452 | 1.00th=[ 1811], 5.00th=[ 2024], 10.00th=[ 2089], 20.00th=[ 2147], 00:15:12.452 | 30.00th=[ 2180], 40.00th=[ 2180], 50.00th=[ 2212], 60.00th=[ 2212], 00:15:12.452 | 70.00th=[ 2245], 80.00th=[ 2278], 90.00th=[ 2311], 95.00th=[ 3163], 00:15:12.452 | 99.00th=[ 5276], 99.50th=[ 5800], 99.90th=[ 7242], 99.95th=[ 7963], 00:15:12.452 | 99.99th=[13042] 00:15:12.452 bw ( KiB/s): min=46416, max=121568, per=100.00%, avg=111245.29, stdev=12788.86, samples=59 00:15:12.452 iops : min=11604, max=30392, avg=27811.32, stdev=3197.22, samples=59 00:15:12.452 lat (usec) : 750=0.01%, 1000=0.01% 00:15:12.452 lat (msec) : 2=6.93%, 4=90.03%, 10=3.02%, 20=0.02%, >=2000=0.01% 00:15:12.452 cpu : usr=3.14%, sys=16.14%, ctx=54958, majf=0, minf=14 00:15:12.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:12.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:15:12.452 issued rwts: total=834801,833701,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:12.452 00:15:12.452 Run status group 0 (all jobs): 00:15:12.452 READ: bw=54.3MiB/s (57.0MB/s), 54.3MiB/s-54.3MiB/s (57.0MB/s-57.0MB/s), io=3261MiB (3419MB), run=60004-60004msec 00:15:12.452 WRITE: bw=54.3MiB/s (56.9MB/s), 54.3MiB/s-54.3MiB/s (56.9MB/s-56.9MB/s), io=3257MiB (3415MB), run=60004-60004msec 00:15:12.452 00:15:12.452 Disk stats (read/write): 00:15:12.452 ublkb1: ios=831695/830577, merge=0/0, ticks=3803280/3678197, in_queue=7481478, util=99.88% 00:15:12.452 23:20:33 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:12.452 23:20:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.452 23:20:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:12.452 [2024-11-17 23:20:33.752479] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:12.452 [2024-11-17 23:20:33.791927] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:12.452 [2024-11-17 23:20:33.792094] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:12.452 [2024-11-17 23:20:33.799918] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:12.452 [2024-11-17 23:20:33.800076] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:12.452 [2024-11-17 23:20:33.800143] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:12.452 23:20:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.452 23:20:33 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:12.452 23:20:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.452 23:20:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:12.452 [2024-11-17 23:20:33.815964] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:12.452 [2024-11-17 23:20:33.817561] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:12.452 [2024-11-17 23:20:33.817590] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:12.452 23:20:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.452 23:20:33 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:12.452 23:20:33 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:12.452 23:20:33 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 82647 00:15:12.452 23:20:33 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 82647 ']' 00:15:12.452 23:20:33 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 82647 00:15:12.452 23:20:33 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:15:12.452 23:20:33 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.452 23:20:33 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82647 00:15:12.452 killing process with pid 82647 00:15:12.452 23:20:33 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:12.452 23:20:33 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:12.452 23:20:33 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82647' 00:15:12.452 23:20:33 ublk_recovery -- common/autotest_common.sh@973 -- # kill 82647 00:15:12.452 23:20:33 ublk_recovery -- common/autotest_common.sh@978 -- # 
wait 82647 00:15:12.452 [2024-11-17 23:20:34.081915] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:12.452 [2024-11-17 23:20:34.081958] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:12.452 ************************************ 00:15:12.452 END TEST ublk_recovery 00:15:12.452 ************************************ 00:15:12.452 00:15:12.452 real 1m3.038s 00:15:12.452 user 1m43.684s 00:15:12.452 sys 0m23.402s 00:15:12.452 23:20:34 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.452 23:20:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:12.452 23:20:34 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:15:12.452 23:20:34 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:15:12.452 23:20:34 -- spdk/autotest.sh@260 -- # timing_exit lib 00:15:12.452 23:20:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:12.452 23:20:34 -- common/autotest_common.sh@10 -- # set +x 00:15:12.452 23:20:34 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:15:12.452 23:20:34 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:15:12.452 23:20:34 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:15:12.452 23:20:34 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:12.452 23:20:34 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:12.452 23:20:34 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:15:12.452 23:20:34 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:15:12.452 23:20:34 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:15:12.452 23:20:34 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:15:12.452 23:20:34 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:15:12.452 23:20:34 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:12.452 23:20:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:12.452 23:20:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.452 23:20:34 -- common/autotest_common.sh@10 -- # set +x 00:15:12.452 ************************************ 00:15:12.452 START TEST ftl 00:15:12.452 ************************************ 00:15:12.452 23:20:34 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:12.452 * Looking for test storage... 
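The trace above closes out the full ublk_recovery path: target creation, a 64 MB malloc bdev with a 4096-byte block size, recovery of ublk device 1 onto that bdev, a 60-second verifying fio run, then device and target teardown. The fio numbers are self-consistent: ~13.9k IOPS at the 4 KiB block size is 13.9k × 4 KiB ≈ 54.3 MiB/s, matching the reported bandwidth. A minimal sketch of the same RPC sequence, using scripts/rpc.py as the test does (the malloc0 name and device id 1 are taken from the log):

    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096   # 64 MB bdev, 4096-byte blocks
    rpc.py ublk_recover_disk malloc0 1             # re-attach bdev malloc0 as ublk device 1
    # ... run verifying I/O against the recovered device, then tear down:
    rpc.py ublk_stop_disk 1
    rpc.py ublk_destroy_target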
00:15:12.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:12.452 23:20:34 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:12.452 23:20:34 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:15:12.452 23:20:34 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:12.452 23:20:34 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:12.452 23:20:34 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:12.452 23:20:34 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:12.452 23:20:34 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:12.452 23:20:34 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.452 23:20:34 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:12.452 23:20:34 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:12.452 23:20:34 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:12.452 23:20:34 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:12.452 23:20:34 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:12.452 23:20:34 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:12.452 23:20:34 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:12.452 23:20:34 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:12.452 23:20:34 ftl -- scripts/common.sh@345 -- # : 1 00:15:12.452 23:20:34 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:12.452 23:20:34 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:12.452 23:20:34 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:12.452 23:20:34 ftl -- scripts/common.sh@353 -- # local d=1 00:15:12.452 23:20:34 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.452 23:20:34 ftl -- scripts/common.sh@355 -- # echo 1 00:15:12.452 23:20:34 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:12.452 23:20:34 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:12.452 23:20:34 ftl -- scripts/common.sh@353 -- # local d=2 00:15:12.452 23:20:34 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.452 23:20:34 ftl -- scripts/common.sh@355 -- # echo 2 00:15:12.452 23:20:34 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:12.452 23:20:34 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.452 23:20:34 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:12.452 23:20:34 ftl -- scripts/common.sh@368 -- # return 0 00:15:12.452 23:20:34 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.452 23:20:34 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:12.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.452 --rc genhtml_branch_coverage=1 00:15:12.452 --rc genhtml_function_coverage=1 00:15:12.452 --rc genhtml_legend=1 00:15:12.452 --rc geninfo_all_blocks=1 00:15:12.452 --rc geninfo_unexecuted_blocks=1 00:15:12.452 00:15:12.452 ' 00:15:12.453 23:20:34 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:12.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.453 --rc genhtml_branch_coverage=1 00:15:12.453 --rc genhtml_function_coverage=1 00:15:12.453 --rc genhtml_legend=1 00:15:12.453 --rc geninfo_all_blocks=1 00:15:12.453 --rc geninfo_unexecuted_blocks=1 00:15:12.453 00:15:12.453 ' 00:15:12.453 23:20:34 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:12.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.453 --rc genhtml_branch_coverage=1 00:15:12.453 --rc genhtml_function_coverage=1 00:15:12.453 --rc 
genhtml_legend=1 00:15:12.453 --rc geninfo_all_blocks=1 00:15:12.453 --rc geninfo_unexecuted_blocks=1 00:15:12.453 00:15:12.453 ' 00:15:12.453 23:20:34 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:12.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.453 --rc genhtml_branch_coverage=1 00:15:12.453 --rc genhtml_function_coverage=1 00:15:12.453 --rc genhtml_legend=1 00:15:12.453 --rc geninfo_all_blocks=1 00:15:12.453 --rc geninfo_unexecuted_blocks=1 00:15:12.453 00:15:12.453 ' 00:15:12.453 23:20:34 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:12.453 23:20:34 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:12.453 23:20:34 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:12.453 23:20:34 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:12.453 23:20:34 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:12.453 23:20:34 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:12.453 23:20:34 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.453 23:20:34 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:12.453 23:20:34 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:12.453 23:20:34 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:12.453 23:20:34 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:12.453 23:20:34 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:12.453 23:20:34 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:12.453 23:20:34 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:12.453 23:20:34 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:12.453 23:20:34 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:12.453 23:20:34 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:12.453 23:20:34 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:12.453 23:20:34 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:12.453 23:20:34 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:12.453 23:20:34 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:12.453 23:20:34 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:12.453 23:20:34 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:12.453 23:20:34 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:12.453 23:20:34 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:12.453 23:20:34 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:12.453 23:20:34 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:12.453 23:20:34 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:12.453 23:20:34 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:12.453 23:20:34 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.453 23:20:34 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:12.453 23:20:34 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:15:12.453 23:20:34 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:12.453 23:20:34 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:12.453 23:20:34 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:12.453 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:12.453 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:12.453 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:12.453 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:12.453 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:12.453 23:20:35 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=83441 00:15:12.453 23:20:35 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:12.453 23:20:35 ftl -- ftl/ftl.sh@38 -- # waitforlisten 83441 00:15:12.453 23:20:35 ftl -- common/autotest_common.sh@835 -- # '[' -z 83441 ']' 00:15:12.453 23:20:35 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.453 23:20:35 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.453 23:20:35 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.453 23:20:35 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.453 23:20:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:12.453 [2024-11-17 23:20:35.164356] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:15:12.453 [2024-11-17 23:20:35.164637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83441 ] 00:15:12.453 [2024-11-17 23:20:35.299993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.453 [2024-11-17 23:20:35.322691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.453 23:20:36 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.453 23:20:36 ftl -- common/autotest_common.sh@868 -- # return 0 00:15:12.453 23:20:36 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:12.453 23:20:36 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:13.019 23:20:36 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:13.019 23:20:36 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:13.278 23:20:37 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:13.278 23:20:37 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:13.278 23:20:37 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:13.536 23:20:37 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:13.536 23:20:37 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:13.536 23:20:37 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:13.536 23:20:37 ftl -- ftl/ftl.sh@50 -- # break 00:15:13.536 23:20:37 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:13.536 23:20:37 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:15:13.536 23:20:37 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:13.536 23:20:37 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:13.794 23:20:37 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:13.794 23:20:37 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:13.794 23:20:37 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:13.794 23:20:37 ftl -- ftl/ftl.sh@63 -- # break 00:15:13.794 23:20:37 ftl -- ftl/ftl.sh@66 -- # killprocess 83441 00:15:13.794 23:20:37 ftl -- common/autotest_common.sh@954 -- # '[' -z 83441 ']' 00:15:13.794 23:20:37 ftl -- common/autotest_common.sh@958 -- # kill -0 83441 00:15:13.794 23:20:37 ftl -- common/autotest_common.sh@959 -- # uname 00:15:13.794 23:20:37 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:13.794 23:20:37 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83441 00:15:13.794 killing process with pid 83441 00:15:13.794 23:20:37 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:13.794 23:20:37 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:13.794 23:20:37 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83441' 00:15:13.794 23:20:37 ftl -- common/autotest_common.sh@973 -- # kill 83441 00:15:13.794 23:20:37 ftl -- common/autotest_common.sh@978 -- # wait 83441 00:15:14.054 23:20:37 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:14.054 23:20:37 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:14.054 23:20:37 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:14.054 23:20:37 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.054 23:20:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:14.054 ************************************ 00:15:14.054 START TEST ftl_fio_basic 00:15:14.054 ************************************ 00:15:14.054 23:20:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:14.054 * Looking for test storage... 
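Before ftl_fio_basic starts, note how ftl.sh picked its two devices above: both selections go through bdev_get_bdevs piped into jq. The NV-cache candidate must be a non-zoned bdev of at least 1310720 blocks reporting a 64-byte metadata size; the base device is any other non-zoned namespace of the same minimum size, with the already-chosen cache address excluded. Restated in isolation, the two probes from the log are:

    rpc.py bdev_get_bdevs | jq -r '.[]
      | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
        .driver_specific.nvme[].pci_address'      # -> 0000:00:10.0 (nv_cache)
    rpc.py bdev_get_bdevs | jq -r '.[]
      | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0"
          and .zoned == false and .num_blocks >= 1310720)
        .driver_specific.nvme[].pci_address'      # -> 0000:00:11.0 (base device)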
00:15:14.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:14.054 23:20:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:14.054 23:20:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:15:14.054 23:20:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:14.314 23:20:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:14.314 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:14.314 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:14.314 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:14.314 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:14.314 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:14.314 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:14.314 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:14.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.315 --rc genhtml_branch_coverage=1 00:15:14.315 --rc genhtml_function_coverage=1 00:15:14.315 --rc genhtml_legend=1 00:15:14.315 --rc geninfo_all_blocks=1 00:15:14.315 --rc geninfo_unexecuted_blocks=1 00:15:14.315 00:15:14.315 ' 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:14.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.315 --rc 
genhtml_branch_coverage=1 00:15:14.315 --rc genhtml_function_coverage=1 00:15:14.315 --rc genhtml_legend=1 00:15:14.315 --rc geninfo_all_blocks=1 00:15:14.315 --rc geninfo_unexecuted_blocks=1 00:15:14.315 00:15:14.315 ' 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:14.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.315 --rc genhtml_branch_coverage=1 00:15:14.315 --rc genhtml_function_coverage=1 00:15:14.315 --rc genhtml_legend=1 00:15:14.315 --rc geninfo_all_blocks=1 00:15:14.315 --rc geninfo_unexecuted_blocks=1 00:15:14.315 00:15:14.315 ' 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:14.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.315 --rc genhtml_branch_coverage=1 00:15:14.315 --rc genhtml_function_coverage=1 00:15:14.315 --rc genhtml_legend=1 00:15:14.315 --rc geninfo_all_blocks=1 00:15:14.315 --rc geninfo_unexecuted_blocks=1 00:15:14.315 00:15:14.315 ' 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:14.315 
23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=83562 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 83562 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 83562 ']' 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
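fio.sh resolves its workload list from the associative array traced above: with "basic" as the suite argument, tests expands to randw-verify randw-verify-j2 randw-verify-depth128, i.e. suite['basic']. The xtrace only records the expanded result, so the exact lookup is not visible; it is presumably of this shape (the suite_arg name is illustrative, not from the script):

    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
    suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
    tests=${suite[$suite_arg]}    # suite_arg was "basic" for this run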
00:15:14.315 23:20:37 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.315 23:20:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:14.315 [2024-11-17 23:20:37.989299] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:15:14.315 [2024-11-17 23:20:37.989849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83562 ] 00:15:14.315 [2024-11-17 23:20:38.130537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:14.573 [2024-11-17 23:20:38.154712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.573 [2024-11-17 23:20:38.154909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.574 [2024-11-17 23:20:38.154962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.139 23:20:38 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.139 23:20:38 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:15:15.139 23:20:38 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:15.139 23:20:38 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:15.139 23:20:38 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:15.139 23:20:38 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:15.139 23:20:38 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:15.139 23:20:38 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:15.402 23:20:39 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:15.402 23:20:39 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:15.402 23:20:39 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:15.402 23:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:15:15.402 23:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:15.402 23:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:15.402 23:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:15.402 23:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:15.660 23:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:15.660 { 00:15:15.660 "name": "nvme0n1", 00:15:15.660 "aliases": [ 00:15:15.660 "cb02b233-31d4-4a9e-ad0c-ea28f816c7c2" 00:15:15.660 ], 00:15:15.660 "product_name": "NVMe disk", 00:15:15.660 "block_size": 4096, 00:15:15.660 "num_blocks": 1310720, 00:15:15.660 "uuid": "cb02b233-31d4-4a9e-ad0c-ea28f816c7c2", 00:15:15.660 "numa_id": -1, 00:15:15.660 "assigned_rate_limits": { 00:15:15.660 "rw_ios_per_sec": 0, 00:15:15.660 "rw_mbytes_per_sec": 0, 00:15:15.660 "r_mbytes_per_sec": 0, 00:15:15.660 "w_mbytes_per_sec": 0 00:15:15.660 }, 00:15:15.660 "claimed": false, 00:15:15.660 "zoned": false, 00:15:15.660 "supported_io_types": { 00:15:15.660 "read": true, 00:15:15.660 "write": true, 00:15:15.660 "unmap": true, 00:15:15.660 "flush": true, 00:15:15.660 "reset": true, 00:15:15.660 "nvme_admin": true, 00:15:15.660 "nvme_io": true, 00:15:15.660 "nvme_io_md": 
false, 00:15:15.660 "write_zeroes": true, 00:15:15.660 "zcopy": false, 00:15:15.660 "get_zone_info": false, 00:15:15.660 "zone_management": false, 00:15:15.660 "zone_append": false, 00:15:15.660 "compare": true, 00:15:15.660 "compare_and_write": false, 00:15:15.660 "abort": true, 00:15:15.660 "seek_hole": false, 00:15:15.660 "seek_data": false, 00:15:15.660 "copy": true, 00:15:15.660 "nvme_iov_md": false 00:15:15.660 }, 00:15:15.660 "driver_specific": { 00:15:15.660 "nvme": [ 00:15:15.660 { 00:15:15.660 "pci_address": "0000:00:11.0", 00:15:15.660 "trid": { 00:15:15.660 "trtype": "PCIe", 00:15:15.660 "traddr": "0000:00:11.0" 00:15:15.660 }, 00:15:15.660 "ctrlr_data": { 00:15:15.660 "cntlid": 0, 00:15:15.660 "vendor_id": "0x1b36", 00:15:15.660 "model_number": "QEMU NVMe Ctrl", 00:15:15.660 "serial_number": "12341", 00:15:15.660 "firmware_revision": "8.0.0", 00:15:15.660 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:15.660 "oacs": { 00:15:15.660 "security": 0, 00:15:15.660 "format": 1, 00:15:15.660 "firmware": 0, 00:15:15.660 "ns_manage": 1 00:15:15.660 }, 00:15:15.660 "multi_ctrlr": false, 00:15:15.660 "ana_reporting": false 00:15:15.660 }, 00:15:15.660 "vs": { 00:15:15.660 "nvme_version": "1.4" 00:15:15.660 }, 00:15:15.660 "ns_data": { 00:15:15.660 "id": 1, 00:15:15.660 "can_share": false 00:15:15.660 } 00:15:15.660 } 00:15:15.660 ], 00:15:15.660 "mp_policy": "active_passive" 00:15:15.660 } 00:15:15.660 } 00:15:15.660 ]' 00:15:15.660 23:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:15.660 23:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:15.660 23:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:15.660 23:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:15:15.660 23:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:15:15.660 23:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:15:15.660 23:20:39 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:15.660 23:20:39 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:15.660 23:20:39 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:15.660 23:20:39 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:15.660 23:20:39 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:15.918 23:20:39 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:15.918 23:20:39 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:16.175 23:20:39 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=a70ed175-9f91-48d0-a37a-71a6d9a41f1b 00:15:16.175 23:20:39 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a70ed175-9f91-48d0-a37a-71a6d9a41f1b 00:15:16.175 23:20:39 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=391913ef-e985-48b8-8a11-c669d482e979 00:15:16.176 23:20:39 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 391913ef-e985-48b8-8a11-c669d482e979 00:15:16.176 23:20:39 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:16.176 23:20:39 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:16.176 23:20:39 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=391913ef-e985-48b8-8a11-c669d482e979 00:15:16.176 23:20:39 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:16.176 23:20:39 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 391913ef-e985-48b8-8a11-c669d482e979 00:15:16.176 23:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=391913ef-e985-48b8-8a11-c669d482e979 00:15:16.176 23:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:16.176 23:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:16.176 23:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:16.176 23:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 391913ef-e985-48b8-8a11-c669d482e979 00:15:16.434 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:16.434 { 00:15:16.434 "name": "391913ef-e985-48b8-8a11-c669d482e979", 00:15:16.434 "aliases": [ 00:15:16.434 "lvs/nvme0n1p0" 00:15:16.434 ], 00:15:16.434 "product_name": "Logical Volume", 00:15:16.434 "block_size": 4096, 00:15:16.434 "num_blocks": 26476544, 00:15:16.434 "uuid": "391913ef-e985-48b8-8a11-c669d482e979", 00:15:16.434 "assigned_rate_limits": { 00:15:16.434 "rw_ios_per_sec": 0, 00:15:16.434 "rw_mbytes_per_sec": 0, 00:15:16.434 "r_mbytes_per_sec": 0, 00:15:16.434 "w_mbytes_per_sec": 0 00:15:16.434 }, 00:15:16.434 "claimed": false, 00:15:16.434 "zoned": false, 00:15:16.434 "supported_io_types": { 00:15:16.434 "read": true, 00:15:16.434 "write": true, 00:15:16.434 "unmap": true, 00:15:16.434 "flush": false, 00:15:16.434 "reset": true, 00:15:16.434 "nvme_admin": false, 00:15:16.434 "nvme_io": false, 00:15:16.434 "nvme_io_md": false, 00:15:16.434 "write_zeroes": true, 00:15:16.434 "zcopy": false, 00:15:16.434 "get_zone_info": false, 00:15:16.434 "zone_management": false, 00:15:16.434 "zone_append": false, 00:15:16.434 "compare": false, 00:15:16.434 "compare_and_write": false, 00:15:16.434 "abort": false, 00:15:16.434 "seek_hole": true, 00:15:16.434 "seek_data": true, 00:15:16.434 "copy": false, 00:15:16.434 "nvme_iov_md": false 00:15:16.434 }, 00:15:16.434 "driver_specific": { 00:15:16.434 "lvol": { 00:15:16.434 "lvol_store_uuid": "a70ed175-9f91-48d0-a37a-71a6d9a41f1b", 00:15:16.434 "base_bdev": "nvme0n1", 00:15:16.434 "thin_provision": true, 00:15:16.434 "num_allocated_clusters": 0, 00:15:16.434 "snapshot": false, 00:15:16.434 "clone": false, 00:15:16.434 "esnap_clone": false 00:15:16.434 } 00:15:16.434 } 00:15:16.434 } 00:15:16.434 ]' 00:15:16.434 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:16.434 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:16.434 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:16.434 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:16.434 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:16.434 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:16.434 23:20:40 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:16.434 23:20:40 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:16.434 23:20:40 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:16.692 23:20:40 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:16.692 23:20:40 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:15:16.692 23:20:40 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 391913ef-e985-48b8-8a11-c669d482e979 00:15:16.692 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=391913ef-e985-48b8-8a11-c669d482e979 00:15:16.692 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:16.692 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:16.692 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:16.692 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 391913ef-e985-48b8-8a11-c669d482e979 00:15:16.950 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:16.950 { 00:15:16.950 "name": "391913ef-e985-48b8-8a11-c669d482e979", 00:15:16.950 "aliases": [ 00:15:16.950 "lvs/nvme0n1p0" 00:15:16.950 ], 00:15:16.950 "product_name": "Logical Volume", 00:15:16.950 "block_size": 4096, 00:15:16.950 "num_blocks": 26476544, 00:15:16.950 "uuid": "391913ef-e985-48b8-8a11-c669d482e979", 00:15:16.950 "assigned_rate_limits": { 00:15:16.950 "rw_ios_per_sec": 0, 00:15:16.951 "rw_mbytes_per_sec": 0, 00:15:16.951 "r_mbytes_per_sec": 0, 00:15:16.951 "w_mbytes_per_sec": 0 00:15:16.951 }, 00:15:16.951 "claimed": false, 00:15:16.951 "zoned": false, 00:15:16.951 "supported_io_types": { 00:15:16.951 "read": true, 00:15:16.951 "write": true, 00:15:16.951 "unmap": true, 00:15:16.951 "flush": false, 00:15:16.951 "reset": true, 00:15:16.951 "nvme_admin": false, 00:15:16.951 "nvme_io": false, 00:15:16.951 "nvme_io_md": false, 00:15:16.951 "write_zeroes": true, 00:15:16.951 "zcopy": false, 00:15:16.951 "get_zone_info": false, 00:15:16.951 "zone_management": false, 00:15:16.951 "zone_append": false, 00:15:16.951 "compare": false, 00:15:16.951 "compare_and_write": false, 00:15:16.951 "abort": false, 00:15:16.951 "seek_hole": true, 00:15:16.951 "seek_data": true, 00:15:16.951 "copy": false, 00:15:16.951 "nvme_iov_md": false 00:15:16.951 }, 00:15:16.951 "driver_specific": { 00:15:16.951 "lvol": { 00:15:16.951 "lvol_store_uuid": "a70ed175-9f91-48d0-a37a-71a6d9a41f1b", 00:15:16.951 "base_bdev": "nvme0n1", 00:15:16.951 "thin_provision": true, 00:15:16.951 "num_allocated_clusters": 0, 00:15:16.951 "snapshot": false, 00:15:16.951 "clone": false, 00:15:16.951 "esnap_clone": false 00:15:16.951 } 00:15:16.951 } 00:15:16.951 } 00:15:16.951 ]' 00:15:16.951 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:16.951 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:16.951 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:16.951 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:16.951 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:16.951 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:16.951 23:20:40 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:16.951 23:20:40 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:17.212 23:20:40 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:17.212 23:20:40 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:17.212 23:20:40 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:17.212 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:17.212 23:20:40 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 391913ef-e985-48b8-8a11-c669d482e979 00:15:17.212 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=391913ef-e985-48b8-8a11-c669d482e979 00:15:17.212 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:17.212 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:17.212 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:17.212 23:20:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 391913ef-e985-48b8-8a11-c669d482e979 00:15:17.472 23:20:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:17.472 { 00:15:17.472 "name": "391913ef-e985-48b8-8a11-c669d482e979", 00:15:17.472 "aliases": [ 00:15:17.472 "lvs/nvme0n1p0" 00:15:17.472 ], 00:15:17.472 "product_name": "Logical Volume", 00:15:17.473 "block_size": 4096, 00:15:17.473 "num_blocks": 26476544, 00:15:17.473 "uuid": "391913ef-e985-48b8-8a11-c669d482e979", 00:15:17.473 "assigned_rate_limits": { 00:15:17.473 "rw_ios_per_sec": 0, 00:15:17.473 "rw_mbytes_per_sec": 0, 00:15:17.473 "r_mbytes_per_sec": 0, 00:15:17.473 "w_mbytes_per_sec": 0 00:15:17.473 }, 00:15:17.473 "claimed": false, 00:15:17.473 "zoned": false, 00:15:17.473 "supported_io_types": { 00:15:17.473 "read": true, 00:15:17.473 "write": true, 00:15:17.473 "unmap": true, 00:15:17.473 "flush": false, 00:15:17.473 "reset": true, 00:15:17.473 "nvme_admin": false, 00:15:17.473 "nvme_io": false, 00:15:17.473 "nvme_io_md": false, 00:15:17.473 "write_zeroes": true, 00:15:17.473 "zcopy": false, 00:15:17.473 "get_zone_info": false, 00:15:17.473 "zone_management": false, 00:15:17.473 "zone_append": false, 00:15:17.473 "compare": false, 00:15:17.473 "compare_and_write": false, 00:15:17.473 "abort": false, 00:15:17.473 "seek_hole": true, 00:15:17.473 "seek_data": true, 00:15:17.473 "copy": false, 00:15:17.473 "nvme_iov_md": false 00:15:17.473 }, 00:15:17.473 "driver_specific": { 00:15:17.473 "lvol": { 00:15:17.473 "lvol_store_uuid": "a70ed175-9f91-48d0-a37a-71a6d9a41f1b", 00:15:17.473 "base_bdev": "nvme0n1", 00:15:17.473 "thin_provision": true, 00:15:17.473 "num_allocated_clusters": 0, 00:15:17.473 "snapshot": false, 00:15:17.473 "clone": false, 00:15:17.473 "esnap_clone": false 00:15:17.473 } 00:15:17.473 } 00:15:17.473 } 00:15:17.473 ]' 00:15:17.473 23:20:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:17.473 23:20:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:17.473 23:20:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:17.473 23:20:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:17.473 23:20:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:17.473 23:20:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:17.473 23:20:41 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:17.473 23:20:41 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:17.473 23:20:41 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 391913ef-e985-48b8-8a11-c669d482e979 -c nvc0n1p0 --l2p_dram_limit 60 00:15:17.732 [2024-11-17 23:20:41.418831] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.732 [2024-11-17 23:20:41.418897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:17.732 [2024-11-17 23:20:41.418910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:17.732 [2024-11-17 23:20:41.418919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.732 [2024-11-17 23:20:41.418965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.732 [2024-11-17 23:20:41.418985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:17.732 [2024-11-17 23:20:41.418992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:15:17.732 [2024-11-17 23:20:41.419001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.732 [2024-11-17 23:20:41.419025] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:17.732 [2024-11-17 23:20:41.419220] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:17.732 [2024-11-17 23:20:41.419232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.732 [2024-11-17 23:20:41.419253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:17.732 [2024-11-17 23:20:41.419270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:15:17.732 [2024-11-17 23:20:41.419278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.732 [2024-11-17 23:20:41.419309] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID af1d8291-f4c3-458e-bd8d-468bbebaae35 00:15:17.732 [2024-11-17 23:20:41.420676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.733 [2024-11-17 23:20:41.420808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:17.733 [2024-11-17 23:20:41.420824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:15:17.733 [2024-11-17 23:20:41.420831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.733 [2024-11-17 23:20:41.427590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.733 [2024-11-17 23:20:41.427705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:17.733 [2024-11-17 23:20:41.427721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.622 ms 00:15:17.733 [2024-11-17 23:20:41.427727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.733 [2024-11-17 23:20:41.427815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.733 [2024-11-17 23:20:41.427832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:17.733 [2024-11-17 23:20:41.427841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:15:17.733 [2024-11-17 23:20:41.427847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.733 [2024-11-17 23:20:41.427914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.733 [2024-11-17 23:20:41.427931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:17.733 [2024-11-17 23:20:41.427941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:17.733 [2024-11-17 23:20:41.427947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:15:17.733 [2024-11-17 23:20:41.427979] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:17.733 [2024-11-17 23:20:41.429588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.733 [2024-11-17 23:20:41.429614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:17.733 [2024-11-17 23:20:41.429632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:15:17.733 [2024-11-17 23:20:41.429641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.733 [2024-11-17 23:20:41.429674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.733 [2024-11-17 23:20:41.429683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:17.733 [2024-11-17 23:20:41.429689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:17.733 [2024-11-17 23:20:41.429699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.733 [2024-11-17 23:20:41.429725] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:17.733 [2024-11-17 23:20:41.429842] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:17.733 [2024-11-17 23:20:41.429853] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:17.733 [2024-11-17 23:20:41.429864] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:17.733 [2024-11-17 23:20:41.429874] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:17.733 [2024-11-17 23:20:41.429897] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:17.733 [2024-11-17 23:20:41.429904] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:17.733 [2024-11-17 23:20:41.429913] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:17.733 [2024-11-17 23:20:41.429920] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:17.733 [2024-11-17 23:20:41.429927] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:17.733 [2024-11-17 23:20:41.429934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.733 [2024-11-17 23:20:41.429941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:17.733 [2024-11-17 23:20:41.429947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:15:17.733 [2024-11-17 23:20:41.429954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.733 [2024-11-17 23:20:41.430030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.733 [2024-11-17 23:20:41.430043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:17.733 [2024-11-17 23:20:41.430049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:17.733 [2024-11-17 23:20:41.430056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.733 [2024-11-17 23:20:41.430162] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:17.733 [2024-11-17 23:20:41.430173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:17.733 
[2024-11-17 23:20:41.430180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:17.733 [2024-11-17 23:20:41.430206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:17.733 [2024-11-17 23:20:41.430212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:17.733 [2024-11-17 23:20:41.430219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:17.733 [2024-11-17 23:20:41.430224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:17.733 [2024-11-17 23:20:41.430232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:17.733 [2024-11-17 23:20:41.430237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:17.733 [2024-11-17 23:20:41.430245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:17.733 [2024-11-17 23:20:41.430250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:17.733 [2024-11-17 23:20:41.430260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:17.733 [2024-11-17 23:20:41.430265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:17.733 [2024-11-17 23:20:41.430274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:17.733 [2024-11-17 23:20:41.430280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:17.733 [2024-11-17 23:20:41.430286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:17.733 [2024-11-17 23:20:41.430292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:17.733 [2024-11-17 23:20:41.430299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:17.733 [2024-11-17 23:20:41.430304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:17.733 [2024-11-17 23:20:41.430311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:17.733 [2024-11-17 23:20:41.430316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:17.733 [2024-11-17 23:20:41.430323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:17.733 [2024-11-17 23:20:41.430333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:17.733 [2024-11-17 23:20:41.430340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:17.733 [2024-11-17 23:20:41.430345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:17.733 [2024-11-17 23:20:41.430352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:17.733 [2024-11-17 23:20:41.430357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:17.733 [2024-11-17 23:20:41.430364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:17.733 [2024-11-17 23:20:41.430369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:17.733 [2024-11-17 23:20:41.430378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:17.733 [2024-11-17 23:20:41.430383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:17.733 [2024-11-17 23:20:41.430389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:17.733 [2024-11-17 23:20:41.430395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:17.733 [2024-11-17 23:20:41.430400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:15:17.733 [2024-11-17 23:20:41.430405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:17.733 [2024-11-17 23:20:41.430412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:17.733 [2024-11-17 23:20:41.430417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:17.733 [2024-11-17 23:20:41.430424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:17.733 [2024-11-17 23:20:41.430430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:17.733 [2024-11-17 23:20:41.430437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:17.733 [2024-11-17 23:20:41.430442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:17.734 [2024-11-17 23:20:41.430448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:17.734 [2024-11-17 23:20:41.430453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:17.734 [2024-11-17 23:20:41.430460] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:17.734 [2024-11-17 23:20:41.430466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:17.734 [2024-11-17 23:20:41.430474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:17.734 [2024-11-17 23:20:41.430482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:17.734 [2024-11-17 23:20:41.430489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:17.734 [2024-11-17 23:20:41.430494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:17.734 [2024-11-17 23:20:41.430501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:17.734 [2024-11-17 23:20:41.430507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:17.734 [2024-11-17 23:20:41.430513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:17.734 [2024-11-17 23:20:41.430518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:17.734 [2024-11-17 23:20:41.430527] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:17.734 [2024-11-17 23:20:41.430546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:17.734 [2024-11-17 23:20:41.430555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:17.734 [2024-11-17 23:20:41.430560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:17.734 [2024-11-17 23:20:41.430568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:17.734 [2024-11-17 23:20:41.430575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:17.734 [2024-11-17 23:20:41.430584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:17.734 [2024-11-17 23:20:41.430590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:17.734 [2024-11-17 
23:20:41.430599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:17.734 [2024-11-17 23:20:41.430604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:17.734 [2024-11-17 23:20:41.430611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:17.734 [2024-11-17 23:20:41.430617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:17.734 [2024-11-17 23:20:41.430624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:17.734 [2024-11-17 23:20:41.430630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:17.734 [2024-11-17 23:20:41.430636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:17.734 [2024-11-17 23:20:41.430642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:17.734 [2024-11-17 23:20:41.430649] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:17.734 [2024-11-17 23:20:41.430655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:17.734 [2024-11-17 23:20:41.430663] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:17.734 [2024-11-17 23:20:41.430669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:17.734 [2024-11-17 23:20:41.430675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:17.734 [2024-11-17 23:20:41.430681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:17.734 [2024-11-17 23:20:41.430687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.734 [2024-11-17 23:20:41.430693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:17.734 [2024-11-17 23:20:41.430713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:15:17.734 [2024-11-17 23:20:41.430719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.734 [2024-11-17 23:20:41.430787] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:15:17.734 [2024-11-17 23:20:41.430796] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:19.635 [2024-11-17 23:20:43.386524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.635 [2024-11-17 23:20:43.386749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:19.635 [2024-11-17 23:20:43.386775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1955.724 ms 00:15:19.635 [2024-11-17 23:20:43.386785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.635 [2024-11-17 23:20:43.397569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.635 [2024-11-17 23:20:43.397728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:19.635 [2024-11-17 23:20:43.397752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.667 ms 00:15:19.635 [2024-11-17 23:20:43.397773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.635 [2024-11-17 23:20:43.397874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.635 [2024-11-17 23:20:43.397914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:19.635 [2024-11-17 23:20:43.397925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:15:19.635 [2024-11-17 23:20:43.397933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.635 [2024-11-17 23:20:43.418693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.635 [2024-11-17 23:20:43.418739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:19.635 [2024-11-17 23:20:43.418754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.701 ms 00:15:19.636 [2024-11-17 23:20:43.418762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.636 [2024-11-17 23:20:43.418803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.636 [2024-11-17 23:20:43.418814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:19.636 [2024-11-17 23:20:43.418824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:19.636 [2024-11-17 23:20:43.418832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.636 [2024-11-17 23:20:43.419318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.636 [2024-11-17 23:20:43.419345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:19.636 [2024-11-17 23:20:43.419360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:15:19.636 [2024-11-17 23:20:43.419369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.636 [2024-11-17 23:20:43.419501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.636 [2024-11-17 23:20:43.419512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:19.636 [2024-11-17 23:20:43.419528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:15:19.636 [2024-11-17 23:20:43.419537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.636 [2024-11-17 23:20:43.426720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.636 [2024-11-17 23:20:43.426756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:19.636 [2024-11-17 
23:20:43.426770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.150 ms 00:15:19.636 [2024-11-17 23:20:43.426784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.636 [2024-11-17 23:20:43.437139] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:19.636 [2024-11-17 23:20:43.452579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.636 [2024-11-17 23:20:43.452618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:19.636 [2024-11-17 23:20:43.452635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.667 ms 00:15:19.636 [2024-11-17 23:20:43.452643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.974 [2024-11-17 23:20:43.481792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.974 [2024-11-17 23:20:43.481834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:19.974 [2024-11-17 23:20:43.481843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.100 ms 00:15:19.974 [2024-11-17 23:20:43.481853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.974 [2024-11-17 23:20:43.482021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.974 [2024-11-17 23:20:43.482032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:19.975 [2024-11-17 23:20:43.482040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:15:19.975 [2024-11-17 23:20:43.482049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.975 [2024-11-17 23:20:43.484632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.975 [2024-11-17 23:20:43.484664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:19.975 [2024-11-17 23:20:43.484672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.549 ms 00:15:19.975 [2024-11-17 23:20:43.484681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.975 [2024-11-17 23:20:43.486772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.975 [2024-11-17 23:20:43.486800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:19.975 [2024-11-17 23:20:43.486808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.052 ms 00:15:19.975 [2024-11-17 23:20:43.486815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.975 [2024-11-17 23:20:43.487113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.975 [2024-11-17 23:20:43.487124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:19.975 [2024-11-17 23:20:43.487131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:15:19.975 [2024-11-17 23:20:43.487149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.975 [2024-11-17 23:20:43.511195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.975 [2024-11-17 23:20:43.511251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:19.975 [2024-11-17 23:20:43.511267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.023 ms 00:15:19.975 [2024-11-17 23:20:43.511280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.975 [2024-11-17 23:20:43.515475] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.975 [2024-11-17 23:20:43.515514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:19.975 [2024-11-17 23:20:43.515525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.122 ms 00:15:19.975 [2024-11-17 23:20:43.515536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.975 [2024-11-17 23:20:43.518396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.975 [2024-11-17 23:20:43.518562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:19.975 [2024-11-17 23:20:43.518579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.820 ms 00:15:19.975 [2024-11-17 23:20:43.518590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.975 [2024-11-17 23:20:43.521684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.975 [2024-11-17 23:20:43.521714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:19.975 [2024-11-17 23:20:43.521724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.059 ms 00:15:19.975 [2024-11-17 23:20:43.521737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.975 [2024-11-17 23:20:43.521788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.975 [2024-11-17 23:20:43.521801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:19.975 [2024-11-17 23:20:43.521811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:19.975 [2024-11-17 23:20:43.521833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.975 [2024-11-17 23:20:43.522052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.975 [2024-11-17 23:20:43.522090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:19.975 [2024-11-17 23:20:43.522112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:15:19.975 [2024-11-17 23:20:43.522199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.975 [2024-11-17 23:20:43.523501] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2104.192 ms, result 0 00:15:19.975 { 00:15:19.975 "name": "ftl0", 00:15:19.975 "uuid": "af1d8291-f4c3-458e-bd8d-468bbebaae35" 00:15:19.975 } 00:15:19.975 23:20:43 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:19.975 23:20:43 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:15:19.975 23:20:43 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:19.975 23:20:43 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:15:19.975 23:20:43 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:19.975 23:20:43 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:19.975 23:20:43 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:19.975 23:20:43 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:20.254 [ 00:15:20.254 { 00:15:20.254 "name": "ftl0", 00:15:20.254 "aliases": [ 00:15:20.254 "af1d8291-f4c3-458e-bd8d-468bbebaae35" 00:15:20.254 ], 00:15:20.254 "product_name": "FTL disk", 00:15:20.254 
"block_size": 4096, 00:15:20.254 "num_blocks": 20971520, 00:15:20.254 "uuid": "af1d8291-f4c3-458e-bd8d-468bbebaae35", 00:15:20.254 "assigned_rate_limits": { 00:15:20.254 "rw_ios_per_sec": 0, 00:15:20.254 "rw_mbytes_per_sec": 0, 00:15:20.254 "r_mbytes_per_sec": 0, 00:15:20.254 "w_mbytes_per_sec": 0 00:15:20.254 }, 00:15:20.254 "claimed": false, 00:15:20.254 "zoned": false, 00:15:20.254 "supported_io_types": { 00:15:20.254 "read": true, 00:15:20.254 "write": true, 00:15:20.254 "unmap": true, 00:15:20.254 "flush": true, 00:15:20.254 "reset": false, 00:15:20.254 "nvme_admin": false, 00:15:20.254 "nvme_io": false, 00:15:20.254 "nvme_io_md": false, 00:15:20.254 "write_zeroes": true, 00:15:20.254 "zcopy": false, 00:15:20.254 "get_zone_info": false, 00:15:20.254 "zone_management": false, 00:15:20.254 "zone_append": false, 00:15:20.254 "compare": false, 00:15:20.254 "compare_and_write": false, 00:15:20.254 "abort": false, 00:15:20.254 "seek_hole": false, 00:15:20.254 "seek_data": false, 00:15:20.254 "copy": false, 00:15:20.254 "nvme_iov_md": false 00:15:20.254 }, 00:15:20.254 "driver_specific": { 00:15:20.254 "ftl": { 00:15:20.254 "base_bdev": "391913ef-e985-48b8-8a11-c669d482e979", 00:15:20.254 "cache": "nvc0n1p0" 00:15:20.254 } 00:15:20.254 } 00:15:20.254 } 00:15:20.254 ] 00:15:20.254 23:20:43 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:15:20.254 23:20:43 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:20.254 23:20:43 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:20.512 23:20:44 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:20.512 23:20:44 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:20.773 [2024-11-17 23:20:44.347982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.773 [2024-11-17 23:20:44.348132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:20.773 [2024-11-17 23:20:44.348200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:20.773 [2024-11-17 23:20:44.348226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.773 [2024-11-17 23:20:44.348275] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:20.773 [2024-11-17 23:20:44.348949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.773 [2024-11-17 23:20:44.349065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:20.773 [2024-11-17 23:20:44.349130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:15:20.773 [2024-11-17 23:20:44.349156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.773 [2024-11-17 23:20:44.349697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.773 [2024-11-17 23:20:44.349777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:20.773 [2024-11-17 23:20:44.349831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:15:20.773 [2024-11-17 23:20:44.349857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.773 [2024-11-17 23:20:44.353149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.773 [2024-11-17 23:20:44.353240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:20.773 [2024-11-17 
23:20:44.353294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.228 ms 00:15:20.773 [2024-11-17 23:20:44.353321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.773 [2024-11-17 23:20:44.359519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.773 [2024-11-17 23:20:44.359558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:20.773 [2024-11-17 23:20:44.359569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.116 ms 00:15:20.773 [2024-11-17 23:20:44.359578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.773 [2024-11-17 23:20:44.361283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.773 [2024-11-17 23:20:44.361325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:20.773 [2024-11-17 23:20:44.361334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.574 ms 00:15:20.773 [2024-11-17 23:20:44.361343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.773 [2024-11-17 23:20:44.365694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.773 [2024-11-17 23:20:44.365731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:20.773 [2024-11-17 23:20:44.365743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.308 ms 00:15:20.773 [2024-11-17 23:20:44.365753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.773 [2024-11-17 23:20:44.365942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.773 [2024-11-17 23:20:44.365955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:20.773 [2024-11-17 23:20:44.365964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:15:20.773 [2024-11-17 23:20:44.365974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.773 [2024-11-17 23:20:44.367434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.773 [2024-11-17 23:20:44.367552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:20.773 [2024-11-17 23:20:44.367566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.433 ms 00:15:20.773 [2024-11-17 23:20:44.367576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.773 [2024-11-17 23:20:44.368990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.773 [2024-11-17 23:20:44.369021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:20.773 [2024-11-17 23:20:44.369030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.369 ms 00:15:20.773 [2024-11-17 23:20:44.369038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.773 [2024-11-17 23:20:44.370258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.773 [2024-11-17 23:20:44.370306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:20.773 [2024-11-17 23:20:44.370317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:15:20.773 [2024-11-17 23:20:44.370328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.773 [2024-11-17 23:20:44.371358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.773 [2024-11-17 23:20:44.371393] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:20.773 [2024-11-17 23:20:44.371402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:15:20.773 [2024-11-17 23:20:44.371414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.773 [2024-11-17 23:20:44.371450] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:20.773 [2024-11-17 23:20:44.371467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 
23:20:44.371692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:20.773 [2024-11-17 23:20:44.371751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:15:20.774 [2024-11-17 23:20:44.371928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.371994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:20.774 [2024-11-17 23:20:44.372399] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:20.774 [2024-11-17 23:20:44.372410] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af1d8291-f4c3-458e-bd8d-468bbebaae35 00:15:20.774 [2024-11-17 23:20:44.372419] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:20.774 [2024-11-17 23:20:44.372427] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:20.774 [2024-11-17 23:20:44.372436] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:20.774 [2024-11-17 23:20:44.372444] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:20.774 [2024-11-17 23:20:44.372463] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:20.774 [2024-11-17 23:20:44.372471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:20.774 [2024-11-17 23:20:44.372480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:20.774 [2024-11-17 23:20:44.372486] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:20.774 [2024-11-17 23:20:44.372495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:20.774 [2024-11-17 23:20:44.372502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.774 [2024-11-17 23:20:44.372511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:20.775 [2024-11-17 23:20:44.372519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:15:20.775 [2024-11-17 23:20:44.372528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.775 [2024-11-17 23:20:44.374466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.775 [2024-11-17 23:20:44.374598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:20.775 [2024-11-17 23:20:44.374624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.906 ms 00:15:20.775 [2024-11-17 23:20:44.374634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.775 [2024-11-17 23:20:44.374759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.775 [2024-11-17 23:20:44.374771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:20.775 [2024-11-17 23:20:44.374780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:15:20.775 [2024-11-17 23:20:44.374791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.775 [2024-11-17 23:20:44.381362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.775 [2024-11-17 23:20:44.381485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:20.775 [2024-11-17 23:20:44.381500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.775 [2024-11-17 23:20:44.381510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.775 
[2024-11-17 23:20:44.381573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.775 [2024-11-17 23:20:44.381585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:20.775 [2024-11-17 23:20:44.381593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.775 [2024-11-17 23:20:44.381605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.775 [2024-11-17 23:20:44.381691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.775 [2024-11-17 23:20:44.381707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:20.775 [2024-11-17 23:20:44.381715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.775 [2024-11-17 23:20:44.381725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.775 [2024-11-17 23:20:44.381758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.775 [2024-11-17 23:20:44.381768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:20.775 [2024-11-17 23:20:44.381776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.775 [2024-11-17 23:20:44.381786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.775 [2024-11-17 23:20:44.394090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.775 [2024-11-17 23:20:44.394130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:20.775 [2024-11-17 23:20:44.394141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.775 [2024-11-17 23:20:44.394151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.775 [2024-11-17 23:20:44.404033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.775 [2024-11-17 23:20:44.404207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:20.775 [2024-11-17 23:20:44.404234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.775 [2024-11-17 23:20:44.404248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.775 [2024-11-17 23:20:44.404339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.775 [2024-11-17 23:20:44.404355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:20.775 [2024-11-17 23:20:44.404364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.775 [2024-11-17 23:20:44.404373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.775 [2024-11-17 23:20:44.404443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.775 [2024-11-17 23:20:44.404455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:20.775 [2024-11-17 23:20:44.404463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.775 [2024-11-17 23:20:44.404484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.775 [2024-11-17 23:20:44.404578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.775 [2024-11-17 23:20:44.404590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:20.775 [2024-11-17 23:20:44.404598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.775 [2024-11-17 23:20:44.404608] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.775 [2024-11-17 23:20:44.404662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.775 [2024-11-17 23:20:44.404674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:20.775 [2024-11-17 23:20:44.404682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.775 [2024-11-17 23:20:44.404701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.775 [2024-11-17 23:20:44.404758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.775 [2024-11-17 23:20:44.404772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:20.775 [2024-11-17 23:20:44.404781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.775 [2024-11-17 23:20:44.404791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.775 [2024-11-17 23:20:44.404845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:20.775 [2024-11-17 23:20:44.404858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:20.775 [2024-11-17 23:20:44.404866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:20.775 [2024-11-17 23:20:44.404972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.775 [2024-11-17 23:20:44.405164] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.151 ms, result 0 00:15:20.775 true 00:15:20.775 23:20:44 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 83562 00:15:20.775 23:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 83562 ']' 00:15:20.775 23:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 83562 00:15:20.775 23:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:15:20.775 23:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:20.775 23:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83562 00:15:20.775 23:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:20.775 23:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:20.775 23:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83562' 00:15:20.775 killing process with pid 83562 00:15:20.775 23:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 83562 00:15:20.775 23:20:44 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 83562 00:15:24.962 23:20:48 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:24.962 23:20:48 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:24.962 23:20:48 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:24.962 23:20:48 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:24.962 23:20:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:25.228 23:20:48 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:25.229 23:20:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:25.229 23:20:48 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:25.229 23:20:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:25.229 23:20:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:25.229 23:20:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:25.229 23:20:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:25.229 23:20:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:25.229 23:20:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:25.229 23:20:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:25.229 23:20:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:25.229 23:20:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:25.229 23:20:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:25.229 23:20:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:25.229 23:20:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:25.229 23:20:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:25.229 23:20:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:25.229 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:25.229 fio-3.35 00:15:25.229 Starting 1 thread 00:15:29.423 00:15:29.423 test: (groupid=0, jobs=1): err= 0: pid=83725: Sun Nov 17 23:20:52 2024 00:15:29.423 read: IOPS=1180, BW=78.4MiB/s (82.2MB/s)(255MiB/3248msec) 00:15:29.423 slat (nsec): min=3971, max=19197, avg=5200.43, stdev=1641.40 00:15:29.423 clat (usec): min=257, max=813, avg=377.97, stdev=77.51 00:15:29.423 lat (usec): min=261, max=818, avg=383.17, stdev=77.82 00:15:29.423 clat percentiles (usec): 00:15:29.423 | 1.00th=[ 281], 5.00th=[ 297], 10.00th=[ 314], 20.00th=[ 318], 00:15:29.423 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 383], 00:15:29.423 | 70.00th=[ 400], 80.00th=[ 445], 90.00th=[ 486], 95.00th=[ 529], 00:15:29.423 | 99.00th=[ 627], 99.50th=[ 685], 99.90th=[ 799], 99.95th=[ 807], 00:15:29.423 | 99.99th=[ 816] 00:15:29.423 write: IOPS=1188, BW=78.9MiB/s (82.8MB/s)(256MiB/3244msec); 0 zone resets 00:15:29.423 slat (usec): min=14, max=100, avg=23.80, stdev= 4.07 00:15:29.423 clat (usec): min=273, max=1300, avg=423.19, stdev=102.08 00:15:29.423 lat (usec): min=297, max=1325, avg=446.99, stdev=102.45 00:15:29.423 clat percentiles (usec): 00:15:29.423 | 1.00th=[ 306], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 343], 00:15:29.423 | 30.00th=[ 351], 40.00th=[ 363], 50.00th=[ 404], 60.00th=[ 429], 00:15:29.423 | 70.00th=[ 469], 80.00th=[ 482], 90.00th=[ 537], 95.00th=[ 603], 00:15:29.423 | 99.00th=[ 816], 99.50th=[ 889], 99.90th=[ 1172], 99.95th=[ 1205], 00:15:29.423 | 99.99th=[ 1303] 00:15:29.423 bw ( KiB/s): min=75072, max=83368, per=99.69%, avg=80580.00, stdev=2990.45, samples=6 00:15:29.423 iops : min= 1104, max= 1226, avg=1185.00, stdev=43.98, samples=6 00:15:29.423 lat (usec) : 500=89.08%, 750=9.95%, 1000=0.86% 00:15:29.423 lat 
(msec) : 2=0.12% 00:15:29.423 cpu : usr=99.29%, sys=0.03%, ctx=6, majf=0, minf=1326 00:15:29.423 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.423 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.423 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.423 00:15:29.423 Run status group 0 (all jobs): 00:15:29.423 READ: bw=78.4MiB/s (82.2MB/s), 78.4MiB/s-78.4MiB/s (82.2MB/s-82.2MB/s), io=255MiB (267MB), run=3248-3248msec 00:15:29.423 WRITE: bw=78.9MiB/s (82.8MB/s), 78.9MiB/s-78.9MiB/s (82.8MB/s-82.8MB/s), io=256MiB (269MB), run=3244-3244msec 00:15:29.684 ----------------------------------------------------- 00:15:29.684 Suppressions used: 00:15:29.684 count bytes template 00:15:29.684 1 5 /usr/src/fio/parse.c 00:15:29.684 1 8 libtcmalloc_minimal.so 00:15:29.684 1 904 libcrypto.so 00:15:29.684 ----------------------------------------------------- 00:15:29.684 00:15:29.684 23:20:53 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:29.684 23:20:53 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:29.684 23:20:53 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:29.945 23:20:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:29.945 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:29.945 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:29.945 fio-3.35 00:15:29.945 Starting 2 threads 00:15:56.524 00:15:56.524 first_half: (groupid=0, jobs=1): err= 0: pid=83806: Sun Nov 17 23:21:17 2024 00:15:56.524 read: IOPS=2829, BW=11.1MiB/s (11.6MB/s)(256MiB/23134msec) 00:15:56.524 slat (nsec): min=2993, max=53273, avg=4935.80, stdev=1218.17 00:15:56.524 clat (usec): min=452, max=289438, avg=37877.75, stdev=28252.82 00:15:56.524 lat (usec): min=455, max=289444, avg=37882.69, stdev=28253.00 00:15:56.524 clat percentiles (msec): 00:15:56.524 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 30], 20.00th=[ 30], 00:15:56.524 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:15:56.524 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 41], 95.00th=[ 83], 00:15:56.524 | 99.00th=[ 192], 99.50th=[ 205], 99.90th=[ 243], 99.95th=[ 255], 00:15:56.524 | 99.99th=[ 284] 00:15:56.524 write: IOPS=2836, BW=11.1MiB/s (11.6MB/s)(256MiB/23107msec); 0 zone resets 00:15:56.524 slat (usec): min=3, max=492, avg= 6.52, stdev= 4.20 00:15:56.524 clat (usec): min=371, max=57115, avg=7316.64, stdev=7617.80 00:15:56.524 lat (usec): min=377, max=57121, avg=7323.16, stdev=7617.89 00:15:56.524 clat percentiles (usec): 00:15:56.524 | 1.00th=[ 734], 5.00th=[ 1090], 10.00th=[ 1532], 20.00th=[ 2704], 00:15:56.524 | 30.00th=[ 3785], 40.00th=[ 4752], 50.00th=[ 5407], 60.00th=[ 6259], 00:15:56.524 | 70.00th=[ 7177], 80.00th=[ 9110], 90.00th=[12911], 95.00th=[25822], 00:15:56.524 | 99.00th=[40633], 99.50th=[45351], 99.90th=[53740], 99.95th=[55313], 00:15:56.524 | 99.99th=[56361] 00:15:56.524 bw ( KiB/s): min= 16, max=56328, per=91.80%, avg=20829.12, stdev=16230.00, samples=25 00:15:56.524 iops : min= 4, max=14082, avg=5207.28, stdev=4057.50, samples=25 00:15:56.524 lat (usec) : 500=0.05%, 750=0.56%, 1000=1.50% 00:15:56.524 lat (msec) : 2=5.30%, 4=8.97%, 10=25.87%, 20=6.14%, 50=47.72% 00:15:56.524 lat (msec) : 100=1.82%, 250=2.03%, 500=0.03% 00:15:56.524 cpu : usr=99.33%, sys=0.12%, ctx=66, majf=0, minf=5585 00:15:56.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:56.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.524 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:56.524 issued rwts: total=65465,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:56.524 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:56.524 second_half: (groupid=0, jobs=1): err= 0: pid=83807: Sun Nov 17 23:21:17 2024 00:15:56.524 read: IOPS=2850, BW=11.1MiB/s (11.7MB/s)(256MiB/22976msec) 00:15:56.524 slat (usec): min=3, max=481, avg= 5.28, stdev= 2.79 00:15:56.524 clat (msec): min=10, max=263, avg=38.27, stdev=25.28 00:15:56.524 lat (msec): min=10, max=263, avg=38.28, stdev=25.28 00:15:56.524 clat percentiles (msec): 00:15:56.524 | 1.00th=[ 27], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 30], 00:15:56.524 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:15:56.524 | 70.00th=[ 35], 80.00th=[ 38], 90.00th=[ 44], 95.00th=[ 79], 00:15:56.524 | 99.00th=[ 174], 99.50th=[ 188], 
99.90th=[ 232], 99.95th=[ 245], 00:15:56.524 | 99.99th=[ 257] 00:15:56.524 write: IOPS=2868, BW=11.2MiB/s (11.7MB/s)(256MiB/22846msec); 0 zone resets 00:15:56.524 slat (usec): min=3, max=1324, avg= 6.55, stdev= 7.84 00:15:56.524 clat (usec): min=388, max=51687, avg=6608.01, stdev=4396.29 00:15:56.524 lat (usec): min=395, max=51693, avg=6614.56, stdev=4396.31 00:15:56.524 clat percentiles (usec): 00:15:56.524 | 1.00th=[ 799], 5.00th=[ 1991], 10.00th=[ 2900], 20.00th=[ 3621], 00:15:56.524 | 30.00th=[ 4359], 40.00th=[ 5145], 50.00th=[ 5735], 60.00th=[ 6521], 00:15:56.524 | 70.00th=[ 7177], 80.00th=[ 8717], 90.00th=[11600], 95.00th=[13173], 00:15:56.524 | 99.00th=[27919], 99.50th=[30540], 99.90th=[44827], 99.95th=[49021], 00:15:56.524 | 99.99th=[50594] 00:15:56.524 bw ( KiB/s): min= 536, max=46120, per=99.76%, avg=22636.52, stdev=13870.28, samples=23 00:15:56.524 iops : min= 134, max=11530, avg=5659.13, stdev=3467.57, samples=23 00:15:56.524 lat (usec) : 500=0.02%, 750=0.31%, 1000=0.53% 00:15:56.524 lat (msec) : 2=1.68%, 4=10.45%, 10=29.17%, 20=7.09%, 50=46.55% 00:15:56.524 lat (msec) : 100=2.39%, 250=1.82%, 500=0.01% 00:15:56.524 cpu : usr=98.62%, sys=0.36%, ctx=66, majf=0, minf=5551 00:15:56.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:56.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.524 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:56.524 issued rwts: total=65490,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:56.524 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:56.524 00:15:56.524 Run status group 0 (all jobs): 00:15:56.524 READ: bw=22.1MiB/s (23.2MB/s), 11.1MiB/s-11.1MiB/s (11.6MB/s-11.7MB/s), io=512MiB (536MB), run=22976-23134msec 00:15:56.524 WRITE: bw=22.2MiB/s (23.2MB/s), 11.1MiB/s-11.2MiB/s (11.6MB/s-11.7MB/s), io=512MiB (537MB), run=22846-23107msec 00:15:56.524 ----------------------------------------------------- 00:15:56.524 Suppressions used: 00:15:56.524 count bytes template 00:15:56.524 2 10 /usr/src/fio/parse.c 00:15:56.524 3 288 /usr/src/fio/iolog.c 00:15:56.524 1 8 libtcmalloc_minimal.so 00:15:56.524 1 904 libcrypto.so 00:15:56.524 ----------------------------------------------------- 00:15:56.524 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:56.524 23:21:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:56.524 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:56.524 fio-3.35 00:15:56.524 Starting 1 thread 00:16:18.491 00:16:18.491 test: (groupid=0, jobs=1): err= 0: pid=84108: Sun Nov 17 23:21:38 2024 00:16:18.491 read: IOPS=5774, BW=22.6MiB/s (23.7MB/s)(255MiB/11292msec) 00:16:18.491 slat (usec): min=3, max=1711, avg= 7.09, stdev= 8.82 00:16:18.491 clat (usec): min=1596, max=46464, avg=22154.72, stdev=3508.61 00:16:18.491 lat (usec): min=1610, max=46468, avg=22161.81, stdev=3508.51 00:16:18.491 clat percentiles (usec): 00:16:18.491 | 1.00th=[15664], 5.00th=[17433], 10.00th=[18220], 20.00th=[19268], 00:16:18.491 | 30.00th=[20317], 40.00th=[21103], 50.00th=[21890], 60.00th=[22676], 00:16:18.491 | 70.00th=[23462], 80.00th=[24511], 90.00th=[26608], 95.00th=[28705], 00:16:18.491 | 99.00th=[32375], 99.50th=[34341], 99.90th=[38011], 99.95th=[40109], 00:16:18.491 | 99.99th=[45876] 00:16:18.491 write: IOPS=9055, BW=35.4MiB/s (37.1MB/s)(256MiB/7237msec); 0 zone resets 00:16:18.491 slat (usec): min=4, max=650, avg= 8.86, stdev= 8.27 00:16:18.491 clat (usec): min=551, max=93617, avg=14060.41, stdev=18504.91 00:16:18.491 lat (usec): min=560, max=93625, avg=14069.28, stdev=18505.17 00:16:18.491 clat percentiles (usec): 00:16:18.491 | 1.00th=[ 1205], 5.00th=[ 1598], 10.00th=[ 1811], 20.00th=[ 2147], 00:16:18.491 | 30.00th=[ 2540], 40.00th=[ 3458], 50.00th=[ 6718], 60.00th=[ 8586], 00:16:18.491 | 70.00th=[11600], 80.00th=[16712], 90.00th=[53740], 95.00th=[58459], 00:16:18.491 | 99.00th=[63177], 99.50th=[64226], 99.90th=[68682], 99.95th=[74974], 00:16:18.491 | 99.99th=[85459] 00:16:18.491 bw ( KiB/s): min=12280, max=65672, per=96.49%, avg=34952.53, stdev=13235.67, samples=15 00:16:18.491 iops : min= 3070, max=16418, avg=8738.13, stdev=3308.92, samples=15 00:16:18.491 lat (usec) : 750=0.01%, 1000=0.14% 00:16:18.491 lat (msec) : 2=7.79%, 4=12.68%, 10=12.17%, 20=22.80%, 50=38.21% 00:16:18.491 lat (msec) : 100=6.19% 00:16:18.491 cpu : usr=97.46%, sys=0.57%, ctx=81, majf=0, minf=5577 00:16:18.491 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:18.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.491 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:18.491 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:18.491 00:16:18.491 Run status group 0 (all jobs): 00:16:18.491 READ: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=255MiB (267MB), run=11292-11292msec 00:16:18.491 WRITE: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=256MiB (268MB), run=7237-7237msec 00:16:18.491 ----------------------------------------------------- 00:16:18.491 Suppressions used: 00:16:18.491 count bytes template 00:16:18.491 1 5 /usr/src/fio/parse.c 00:16:18.491 2 192 /usr/src/fio/iolog.c 00:16:18.491 1 8 libtcmalloc_minimal.so 00:16:18.491 1 904 libcrypto.so 00:16:18.491 ----------------------------------------------------- 00:16:18.491 00:16:18.491 23:21:39 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:18.491 23:21:39 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:18.491 23:21:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:18.491 23:21:39 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:18.491 Remove shared memory files 00:16:18.491 23:21:39 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:18.491 23:21:39 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:18.491 23:21:39 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:18.491 23:21:39 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:18.491 23:21:39 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69113 /dev/shm/spdk_tgt_trace.pid82509 00:16:18.491 23:21:39 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:18.491 23:21:39 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:18.491 ************************************ 00:16:18.491 END TEST ftl_fio_basic 00:16:18.491 ************************************ 00:16:18.491 00:16:18.491 real 1m2.159s 00:16:18.491 user 2m15.899s 00:16:18.491 sys 0m2.864s 00:16:18.491 23:21:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.491 23:21:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:18.491 23:21:39 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:18.491 23:21:39 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:18.491 23:21:39 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.491 23:21:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:18.491 ************************************ 00:16:18.491 START TEST ftl_bdevperf 00:16:18.491 ************************************ 00:16:18.491 23:21:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:18.491 * Looking for test storage... 
00:16:18.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:18.491 23:21:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:18.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.492 --rc genhtml_branch_coverage=1 00:16:18.492 --rc genhtml_function_coverage=1 00:16:18.492 --rc genhtml_legend=1 00:16:18.492 --rc geninfo_all_blocks=1 00:16:18.492 --rc geninfo_unexecuted_blocks=1 00:16:18.492 00:16:18.492 ' 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:18.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.492 --rc genhtml_branch_coverage=1 00:16:18.492 
--rc genhtml_function_coverage=1 00:16:18.492 --rc genhtml_legend=1 00:16:18.492 --rc geninfo_all_blocks=1 00:16:18.492 --rc geninfo_unexecuted_blocks=1 00:16:18.492 00:16:18.492 ' 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:18.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.492 --rc genhtml_branch_coverage=1 00:16:18.492 --rc genhtml_function_coverage=1 00:16:18.492 --rc genhtml_legend=1 00:16:18.492 --rc geninfo_all_blocks=1 00:16:18.492 --rc geninfo_unexecuted_blocks=1 00:16:18.492 00:16:18.492 ' 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:18.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.492 --rc genhtml_branch_coverage=1 00:16:18.492 --rc genhtml_function_coverage=1 00:16:18.492 --rc genhtml_legend=1 00:16:18.492 --rc geninfo_all_blocks=1 00:16:18.492 --rc geninfo_unexecuted_blocks=1 00:16:18.492 00:16:18.492 ' 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=84390 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:18.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 84390 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 84390 ']' 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.492 23:21:40 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:18.492 [2024-11-17 23:21:40.225138] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
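The trace that follows assembles the FTL bdev stack over the freshly opened RPC socket before any I/O is issued. Condensed into plain shell, the sequence looks like the sketch below; every command and argument is taken verbatim from this log, while RPC, lvs and lvol are shorthand introduced here for readability (the harness itself also re-validates each bdev with bdev_get_bdevs between steps):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Base device at 0000:00:11.0 becomes nvme0n1; an lvstore and a
  # thin-provisioned 103424 MiB lvol are carved out of it.
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  lvs=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)
  lvol=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")

  # Cache device at 0000:00:10.0 becomes nvc0n1; a single 5171 MiB
  # split of it serves as the FTL non-volatile write-buffer cache.
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $RPC bdev_split_create nvc0n1 -s 5171 1

  # Bind both into the ftl0 bdev that bdevperf (started above with
  # -z -T ftl0) targets, with a 240 s RPC timeout and a 20 MiB L2P
  # DRAM limit.
  $RPC -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 20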
00:16:18.492 [2024-11-17 23:21:40.225283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84390 ] 00:16:18.492 [2024-11-17 23:21:40.371914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.492 [2024-11-17 23:21:40.401241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.492 23:21:41 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.492 23:21:41 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:16:18.492 23:21:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:18.492 23:21:41 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:18.492 23:21:41 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:18.492 23:21:41 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:18.492 23:21:41 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:18.492 23:21:41 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:18.492 23:21:41 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:18.492 23:21:41 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:18.492 23:21:41 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:18.492 23:21:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:16:18.492 23:21:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:18.492 23:21:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:18.492 23:21:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:18.492 23:21:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:18.492 23:21:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:18.492 { 00:16:18.492 "name": "nvme0n1", 00:16:18.492 "aliases": [ 00:16:18.492 "ffb15d65-2446-4336-a000-a8c77b9f6370" 00:16:18.492 ], 00:16:18.492 "product_name": "NVMe disk", 00:16:18.492 "block_size": 4096, 00:16:18.492 "num_blocks": 1310720, 00:16:18.492 "uuid": "ffb15d65-2446-4336-a000-a8c77b9f6370", 00:16:18.492 "numa_id": -1, 00:16:18.492 "assigned_rate_limits": { 00:16:18.492 "rw_ios_per_sec": 0, 00:16:18.492 "rw_mbytes_per_sec": 0, 00:16:18.492 "r_mbytes_per_sec": 0, 00:16:18.492 "w_mbytes_per_sec": 0 00:16:18.492 }, 00:16:18.492 "claimed": true, 00:16:18.492 "claim_type": "read_many_write_one", 00:16:18.492 "zoned": false, 00:16:18.492 "supported_io_types": { 00:16:18.492 "read": true, 00:16:18.492 "write": true, 00:16:18.492 "unmap": true, 00:16:18.492 "flush": true, 00:16:18.492 "reset": true, 00:16:18.492 "nvme_admin": true, 00:16:18.492 "nvme_io": true, 00:16:18.492 "nvme_io_md": false, 00:16:18.492 "write_zeroes": true, 00:16:18.493 "zcopy": false, 00:16:18.493 "get_zone_info": false, 00:16:18.493 "zone_management": false, 00:16:18.493 "zone_append": false, 00:16:18.493 "compare": true, 00:16:18.493 "compare_and_write": false, 00:16:18.493 "abort": true, 00:16:18.493 "seek_hole": false, 00:16:18.493 "seek_data": false, 00:16:18.493 "copy": true, 00:16:18.493 "nvme_iov_md": false 00:16:18.493 }, 00:16:18.493 "driver_specific": { 00:16:18.493 
"nvme": [ 00:16:18.493 { 00:16:18.493 "pci_address": "0000:00:11.0", 00:16:18.493 "trid": { 00:16:18.493 "trtype": "PCIe", 00:16:18.493 "traddr": "0000:00:11.0" 00:16:18.493 }, 00:16:18.493 "ctrlr_data": { 00:16:18.493 "cntlid": 0, 00:16:18.493 "vendor_id": "0x1b36", 00:16:18.493 "model_number": "QEMU NVMe Ctrl", 00:16:18.493 "serial_number": "12341", 00:16:18.493 "firmware_revision": "8.0.0", 00:16:18.493 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:18.493 "oacs": { 00:16:18.493 "security": 0, 00:16:18.493 "format": 1, 00:16:18.493 "firmware": 0, 00:16:18.493 "ns_manage": 1 00:16:18.493 }, 00:16:18.493 "multi_ctrlr": false, 00:16:18.493 "ana_reporting": false 00:16:18.493 }, 00:16:18.493 "vs": { 00:16:18.493 "nvme_version": "1.4" 00:16:18.493 }, 00:16:18.493 "ns_data": { 00:16:18.493 "id": 1, 00:16:18.493 "can_share": false 00:16:18.493 } 00:16:18.493 } 00:16:18.493 ], 00:16:18.493 "mp_policy": "active_passive" 00:16:18.493 } 00:16:18.493 } 00:16:18.493 ]' 00:16:18.493 23:21:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:18.493 23:21:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:18.493 23:21:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:18.493 23:21:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:16:18.493 23:21:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:16:18.493 23:21:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:16:18.493 23:21:41 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:18.493 23:21:41 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:18.493 23:21:41 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:18.493 23:21:41 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:18.493 23:21:41 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:18.493 23:21:41 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=a70ed175-9f91-48d0-a37a-71a6d9a41f1b 00:16:18.493 23:21:41 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:18.493 23:21:41 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a70ed175-9f91-48d0-a37a-71a6d9a41f1b 00:16:18.493 23:21:42 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:18.754 23:21:42 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=becfe7ae-61a4-4a4f-bd76-0003737864aa 00:16:18.754 23:21:42 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u becfe7ae-61a4-4a4f-bd76-0003737864aa 00:16:18.754 23:21:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=432f18dc-b119-43a3-8e77-5d7d5a3ba772 00:16:18.754 23:21:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 432f18dc-b119-43a3-8e77-5d7d5a3ba772 00:16:18.754 23:21:42 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:18.754 23:21:42 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:18.754 23:21:42 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=432f18dc-b119-43a3-8e77-5d7d5a3ba772 00:16:18.754 23:21:42 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:18.754 23:21:42 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 432f18dc-b119-43a3-8e77-5d7d5a3ba772 00:16:18.754 23:21:42 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=432f18dc-b119-43a3-8e77-5d7d5a3ba772 00:16:19.016 23:21:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:19.016 23:21:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:19.016 23:21:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:19.016 23:21:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 432f18dc-b119-43a3-8e77-5d7d5a3ba772 00:16:19.016 23:21:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:19.016 { 00:16:19.016 "name": "432f18dc-b119-43a3-8e77-5d7d5a3ba772", 00:16:19.016 "aliases": [ 00:16:19.016 "lvs/nvme0n1p0" 00:16:19.016 ], 00:16:19.016 "product_name": "Logical Volume", 00:16:19.016 "block_size": 4096, 00:16:19.016 "num_blocks": 26476544, 00:16:19.016 "uuid": "432f18dc-b119-43a3-8e77-5d7d5a3ba772", 00:16:19.016 "assigned_rate_limits": { 00:16:19.016 "rw_ios_per_sec": 0, 00:16:19.016 "rw_mbytes_per_sec": 0, 00:16:19.016 "r_mbytes_per_sec": 0, 00:16:19.016 "w_mbytes_per_sec": 0 00:16:19.016 }, 00:16:19.016 "claimed": false, 00:16:19.016 "zoned": false, 00:16:19.016 "supported_io_types": { 00:16:19.016 "read": true, 00:16:19.016 "write": true, 00:16:19.016 "unmap": true, 00:16:19.016 "flush": false, 00:16:19.016 "reset": true, 00:16:19.016 "nvme_admin": false, 00:16:19.016 "nvme_io": false, 00:16:19.016 "nvme_io_md": false, 00:16:19.016 "write_zeroes": true, 00:16:19.016 "zcopy": false, 00:16:19.016 "get_zone_info": false, 00:16:19.016 "zone_management": false, 00:16:19.016 "zone_append": false, 00:16:19.016 "compare": false, 00:16:19.016 "compare_and_write": false, 00:16:19.016 "abort": false, 00:16:19.016 "seek_hole": true, 00:16:19.016 "seek_data": true, 00:16:19.016 "copy": false, 00:16:19.016 "nvme_iov_md": false 00:16:19.016 }, 00:16:19.016 "driver_specific": { 00:16:19.016 "lvol": { 00:16:19.016 "lvol_store_uuid": "becfe7ae-61a4-4a4f-bd76-0003737864aa", 00:16:19.016 "base_bdev": "nvme0n1", 00:16:19.016 "thin_provision": true, 00:16:19.016 "num_allocated_clusters": 0, 00:16:19.016 "snapshot": false, 00:16:19.016 "clone": false, 00:16:19.016 "esnap_clone": false 00:16:19.016 } 00:16:19.016 } 00:16:19.016 } 00:16:19.016 ]' 00:16:19.016 23:21:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:19.016 23:21:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:19.016 23:21:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:19.277 23:21:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:19.277 23:21:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:19.277 23:21:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:19.277 23:21:42 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:19.277 23:21:42 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:19.277 23:21:42 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:19.538 23:21:43 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:19.538 23:21:43 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:19.538 23:21:43 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 432f18dc-b119-43a3-8e77-5d7d5a3ba772 00:16:19.538 23:21:43 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=432f18dc-b119-43a3-8e77-5d7d5a3ba772 00:16:19.538 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:19.538 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:19.538 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:19.538 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 432f18dc-b119-43a3-8e77-5d7d5a3ba772 00:16:19.801 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:19.801 { 00:16:19.801 "name": "432f18dc-b119-43a3-8e77-5d7d5a3ba772", 00:16:19.801 "aliases": [ 00:16:19.801 "lvs/nvme0n1p0" 00:16:19.801 ], 00:16:19.801 "product_name": "Logical Volume", 00:16:19.801 "block_size": 4096, 00:16:19.801 "num_blocks": 26476544, 00:16:19.801 "uuid": "432f18dc-b119-43a3-8e77-5d7d5a3ba772", 00:16:19.801 "assigned_rate_limits": { 00:16:19.801 "rw_ios_per_sec": 0, 00:16:19.801 "rw_mbytes_per_sec": 0, 00:16:19.801 "r_mbytes_per_sec": 0, 00:16:19.801 "w_mbytes_per_sec": 0 00:16:19.801 }, 00:16:19.801 "claimed": false, 00:16:19.801 "zoned": false, 00:16:19.801 "supported_io_types": { 00:16:19.801 "read": true, 00:16:19.801 "write": true, 00:16:19.801 "unmap": true, 00:16:19.801 "flush": false, 00:16:19.801 "reset": true, 00:16:19.801 "nvme_admin": false, 00:16:19.801 "nvme_io": false, 00:16:19.801 "nvme_io_md": false, 00:16:19.801 "write_zeroes": true, 00:16:19.801 "zcopy": false, 00:16:19.801 "get_zone_info": false, 00:16:19.801 "zone_management": false, 00:16:19.801 "zone_append": false, 00:16:19.801 "compare": false, 00:16:19.801 "compare_and_write": false, 00:16:19.801 "abort": false, 00:16:19.801 "seek_hole": true, 00:16:19.801 "seek_data": true, 00:16:19.801 "copy": false, 00:16:19.801 "nvme_iov_md": false 00:16:19.801 }, 00:16:19.801 "driver_specific": { 00:16:19.801 "lvol": { 00:16:19.801 "lvol_store_uuid": "becfe7ae-61a4-4a4f-bd76-0003737864aa", 00:16:19.801 "base_bdev": "nvme0n1", 00:16:19.801 "thin_provision": true, 00:16:19.801 "num_allocated_clusters": 0, 00:16:19.801 "snapshot": false, 00:16:19.801 "clone": false, 00:16:19.801 "esnap_clone": false 00:16:19.801 } 00:16:19.801 } 00:16:19.801 } 00:16:19.801 ]' 00:16:19.801 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:19.801 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:19.801 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:19.801 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:19.801 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:19.801 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:19.801 23:21:43 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:19.801 23:21:43 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:20.062 23:21:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:20.062 23:21:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 432f18dc-b119-43a3-8e77-5d7d5a3ba772 00:16:20.062 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=432f18dc-b119-43a3-8e77-5d7d5a3ba772 00:16:20.062 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:20.062 23:21:43 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:16:20.062 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:20.062 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 432f18dc-b119-43a3-8e77-5d7d5a3ba772 00:16:20.062 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:20.062 { 00:16:20.062 "name": "432f18dc-b119-43a3-8e77-5d7d5a3ba772", 00:16:20.062 "aliases": [ 00:16:20.062 "lvs/nvme0n1p0" 00:16:20.062 ], 00:16:20.062 "product_name": "Logical Volume", 00:16:20.062 "block_size": 4096, 00:16:20.062 "num_blocks": 26476544, 00:16:20.062 "uuid": "432f18dc-b119-43a3-8e77-5d7d5a3ba772", 00:16:20.062 "assigned_rate_limits": { 00:16:20.062 "rw_ios_per_sec": 0, 00:16:20.062 "rw_mbytes_per_sec": 0, 00:16:20.062 "r_mbytes_per_sec": 0, 00:16:20.062 "w_mbytes_per_sec": 0 00:16:20.062 }, 00:16:20.062 "claimed": false, 00:16:20.062 "zoned": false, 00:16:20.062 "supported_io_types": { 00:16:20.062 "read": true, 00:16:20.062 "write": true, 00:16:20.062 "unmap": true, 00:16:20.062 "flush": false, 00:16:20.062 "reset": true, 00:16:20.062 "nvme_admin": false, 00:16:20.062 "nvme_io": false, 00:16:20.062 "nvme_io_md": false, 00:16:20.062 "write_zeroes": true, 00:16:20.062 "zcopy": false, 00:16:20.062 "get_zone_info": false, 00:16:20.062 "zone_management": false, 00:16:20.062 "zone_append": false, 00:16:20.062 "compare": false, 00:16:20.062 "compare_and_write": false, 00:16:20.062 "abort": false, 00:16:20.062 "seek_hole": true, 00:16:20.062 "seek_data": true, 00:16:20.062 "copy": false, 00:16:20.062 "nvme_iov_md": false 00:16:20.062 }, 00:16:20.062 "driver_specific": { 00:16:20.062 "lvol": { 00:16:20.062 "lvol_store_uuid": "becfe7ae-61a4-4a4f-bd76-0003737864aa", 00:16:20.062 "base_bdev": "nvme0n1", 00:16:20.062 "thin_provision": true, 00:16:20.062 "num_allocated_clusters": 0, 00:16:20.062 "snapshot": false, 00:16:20.062 "clone": false, 00:16:20.062 "esnap_clone": false 00:16:20.062 } 00:16:20.062 } 00:16:20.062 } 00:16:20.062 ]' 00:16:20.062 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:20.324 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:20.324 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:20.324 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:20.324 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:20.324 23:21:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:16:20.324 23:21:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:20.324 23:21:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 432f18dc-b119-43a3-8e77-5d7d5a3ba772 -c nvc0n1p0 --l2p_dram_limit 20 00:16:20.324 [2024-11-17 23:21:44.109794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.324 [2024-11-17 23:21:44.109838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:20.324 [2024-11-17 23:21:44.109850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:20.324 [2024-11-17 23:21:44.109856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.324 [2024-11-17 23:21:44.109908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.324 [2024-11-17 23:21:44.109916] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:20.324 [2024-11-17 23:21:44.109928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:20.324 [2024-11-17 23:21:44.109934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.324 [2024-11-17 23:21:44.109956] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:20.324 [2024-11-17 23:21:44.110384] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:20.324 [2024-11-17 23:21:44.110417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.324 [2024-11-17 23:21:44.110425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:20.324 [2024-11-17 23:21:44.110435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:16:20.324 [2024-11-17 23:21:44.110441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.324 [2024-11-17 23:21:44.110505] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1c667c59-7557-44fa-b903-a7f282abba09 00:16:20.324 [2024-11-17 23:21:44.111447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.324 [2024-11-17 23:21:44.111470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:20.324 [2024-11-17 23:21:44.111478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:20.324 [2024-11-17 23:21:44.111486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.324 [2024-11-17 23:21:44.116242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.324 [2024-11-17 23:21:44.116270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:20.324 [2024-11-17 23:21:44.116278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.718 ms 00:16:20.324 [2024-11-17 23:21:44.116289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.324 [2024-11-17 23:21:44.116342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.324 [2024-11-17 23:21:44.116353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:20.324 [2024-11-17 23:21:44.116359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:20.324 [2024-11-17 23:21:44.116369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.324 [2024-11-17 23:21:44.116400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.324 [2024-11-17 23:21:44.116411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:20.324 [2024-11-17 23:21:44.116417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:20.324 [2024-11-17 23:21:44.116426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.324 [2024-11-17 23:21:44.116440] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:20.324 [2024-11-17 23:21:44.117690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.324 [2024-11-17 23:21:44.117721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:20.324 [2024-11-17 23:21:44.117732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:16:20.324 [2024-11-17 23:21:44.117739] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.324 [2024-11-17 23:21:44.117765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.324 [2024-11-17 23:21:44.117771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:20.324 [2024-11-17 23:21:44.117780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:20.324 [2024-11-17 23:21:44.117786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.324 [2024-11-17 23:21:44.117798] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:20.324 [2024-11-17 23:21:44.117914] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:20.324 [2024-11-17 23:21:44.117927] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:20.324 [2024-11-17 23:21:44.117936] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:20.324 [2024-11-17 23:21:44.117945] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:20.324 [2024-11-17 23:21:44.117952] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:20.324 [2024-11-17 23:21:44.117965] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:20.324 [2024-11-17 23:21:44.117971] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:20.324 [2024-11-17 23:21:44.117978] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:20.324 [2024-11-17 23:21:44.117983] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:20.324 [2024-11-17 23:21:44.117992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.325 [2024-11-17 23:21:44.117997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:20.325 [2024-11-17 23:21:44.118004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:16:20.325 [2024-11-17 23:21:44.118010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.325 [2024-11-17 23:21:44.118076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.325 [2024-11-17 23:21:44.118082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:20.325 [2024-11-17 23:21:44.118091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:20.325 [2024-11-17 23:21:44.118097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.325 [2024-11-17 23:21:44.118165] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:20.325 [2024-11-17 23:21:44.118175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:20.325 [2024-11-17 23:21:44.118184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:20.325 [2024-11-17 23:21:44.118190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.325 [2024-11-17 23:21:44.118198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:20.325 [2024-11-17 23:21:44.118203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:20.325 [2024-11-17 23:21:44.118209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:20.325 
[2024-11-17 23:21:44.118214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:20.325 [2024-11-17 23:21:44.118221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:20.325 [2024-11-17 23:21:44.118225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:20.325 [2024-11-17 23:21:44.118232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:20.325 [2024-11-17 23:21:44.118237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:20.325 [2024-11-17 23:21:44.118244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:20.325 [2024-11-17 23:21:44.118249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:20.325 [2024-11-17 23:21:44.118255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:20.325 [2024-11-17 23:21:44.118260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.325 [2024-11-17 23:21:44.118268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:20.325 [2024-11-17 23:21:44.118273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:20.325 [2024-11-17 23:21:44.118279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.325 [2024-11-17 23:21:44.118284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:20.325 [2024-11-17 23:21:44.118291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:20.325 [2024-11-17 23:21:44.118295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:20.325 [2024-11-17 23:21:44.118302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:20.325 [2024-11-17 23:21:44.118308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:20.325 [2024-11-17 23:21:44.118314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:20.325 [2024-11-17 23:21:44.118319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:20.325 [2024-11-17 23:21:44.118326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:20.325 [2024-11-17 23:21:44.118331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:20.325 [2024-11-17 23:21:44.118339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:20.325 [2024-11-17 23:21:44.118345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:20.325 [2024-11-17 23:21:44.118352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:20.325 [2024-11-17 23:21:44.118357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:20.325 [2024-11-17 23:21:44.118365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:20.325 [2024-11-17 23:21:44.118370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:20.325 [2024-11-17 23:21:44.118378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:20.325 [2024-11-17 23:21:44.118384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:20.325 [2024-11-17 23:21:44.118391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:20.325 [2024-11-17 23:21:44.118397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:20.325 [2024-11-17 23:21:44.118404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:20.325 [2024-11-17 23:21:44.118410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.325 [2024-11-17 23:21:44.118417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:20.325 [2024-11-17 23:21:44.118423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:20.325 [2024-11-17 23:21:44.118429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.325 [2024-11-17 23:21:44.118435] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:20.325 [2024-11-17 23:21:44.118444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:20.325 [2024-11-17 23:21:44.118451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:20.325 [2024-11-17 23:21:44.118458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.325 [2024-11-17 23:21:44.118465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:20.325 [2024-11-17 23:21:44.118473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:20.325 [2024-11-17 23:21:44.118479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:20.325 [2024-11-17 23:21:44.118486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:20.325 [2024-11-17 23:21:44.118492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:20.325 [2024-11-17 23:21:44.118501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:20.325 [2024-11-17 23:21:44.118510] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:20.325 [2024-11-17 23:21:44.118522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:20.325 [2024-11-17 23:21:44.118530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:20.325 [2024-11-17 23:21:44.118538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:20.325 [2024-11-17 23:21:44.118544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:20.325 [2024-11-17 23:21:44.118552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:20.325 [2024-11-17 23:21:44.118558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:20.325 [2024-11-17 23:21:44.118567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:20.325 [2024-11-17 23:21:44.118574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:20.325 [2024-11-17 23:21:44.118581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:20.325 [2024-11-17 23:21:44.118589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:20.325 [2024-11-17 23:21:44.118600] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:20.325 [2024-11-17 23:21:44.118607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:20.325 [2024-11-17 23:21:44.118614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:20.325 [2024-11-17 23:21:44.118621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:20.325 [2024-11-17 23:21:44.118629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:20.325 [2024-11-17 23:21:44.118635] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:20.325 [2024-11-17 23:21:44.118644] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:20.325 [2024-11-17 23:21:44.118652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:20.325 [2024-11-17 23:21:44.118661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:20.325 [2024-11-17 23:21:44.118668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:20.325 [2024-11-17 23:21:44.118676] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:20.325 [2024-11-17 23:21:44.118683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.325 [2024-11-17 23:21:44.118692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:20.325 [2024-11-17 23:21:44.118699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:16:20.325 [2024-11-17 23:21:44.118706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.325 [2024-11-17 23:21:44.118730] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
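The layout dump above is internally consistent, which is worth a quick check when debugging capacity issues. The sketch below is pure arithmetic on values printed in this trace (a throwaway check; nothing here touches the device), and the reading of the last line as an overprovisioning reserve is an inference from the numbers, not something the log states:

  # L2P table size: 20971520 entries x 4 B addresses = the 80.00 MiB
  # "Region l2p" reported above.
  echo $(( 20971520 * 4 / 1024 / 1024 ))       # -> 80

  # Base bdev: 26476544 blocks x 4096 B = the 103424.00 MiB capacity.
  echo $(( 26476544 * 4096 / 1024 / 1024 ))    # -> 103424

  # Addressable user space: 20971520 entries x 4 KiB blocks = 81920 MiB,
  # i.e. 80 % of the 102400 MiB data_btm region; the remaining 20 % is
  # presumably the FTL overprovisioning/relocation reserve.
  echo $(( 20971520 * 4096 / 1024 / 1024 ))    # -> 81920

The same numbers explain the --l2p_dram_limit 20 passed to bdev_ftl_create: only a 20 MiB window of the 80 MiB table may stay resident, and the ftl_l2p_cache line later in this trace confirms a maximum resident size of 19 (of 20) MiB.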
00:16:20.325 [2024-11-17 23:21:44.118739] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:24.538 [2024-11-17 23:21:47.972253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.538 [2024-11-17 23:21:47.972307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:24.538 [2024-11-17 23:21:47.972319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3853.508 ms 00:16:24.539 [2024-11-17 23:21:47.972330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.539 [2024-11-17 23:21:47.979696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.539 [2024-11-17 23:21:47.979733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:24.539 [2024-11-17 23:21:47.979746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.301 ms 00:16:24.539 [2024-11-17 23:21:47.979756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.539 [2024-11-17 23:21:47.979822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.539 [2024-11-17 23:21:47.979830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:24.539 [2024-11-17 23:21:47.979837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:24.539 [2024-11-17 23:21:47.979847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.539 [2024-11-17 23:21:47.998760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.539 [2024-11-17 23:21:47.998813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:24.539 [2024-11-17 23:21:47.998828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.880 ms 00:16:24.539 [2024-11-17 23:21:47.998841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.539 [2024-11-17 23:21:47.998890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.539 [2024-11-17 23:21:47.998905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:24.539 [2024-11-17 23:21:47.998918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:24.539 [2024-11-17 23:21:47.998929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.539 [2024-11-17 23:21:47.999294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.539 [2024-11-17 23:21:47.999351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:24.539 [2024-11-17 23:21:47.999363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:16:24.539 [2024-11-17 23:21:47.999377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.539 [2024-11-17 23:21:47.999516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.539 [2024-11-17 23:21:47.999537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:24.539 [2024-11-17 23:21:47.999550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:16:24.539 [2024-11-17 23:21:47.999565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.539 [2024-11-17 23:21:48.004876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.539 [2024-11-17 23:21:48.004927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:24.539 [2024-11-17 
23:21:48.004936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.291 ms 00:16:24.539 [2024-11-17 23:21:48.004949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.539 [2024-11-17 23:21:48.013289] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:24.539 [2024-11-17 23:21:48.018248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.539 [2024-11-17 23:21:48.018272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:24.539 [2024-11-17 23:21:48.018285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.245 ms 00:16:24.539 [2024-11-17 23:21:48.018292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.539 [2024-11-17 23:21:48.082754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.539 [2024-11-17 23:21:48.082788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:24.539 [2024-11-17 23:21:48.082804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.439 ms 00:16:24.539 [2024-11-17 23:21:48.082811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.539 [2024-11-17 23:21:48.082962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.539 [2024-11-17 23:21:48.082971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:24.539 [2024-11-17 23:21:48.082981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:16:24.539 [2024-11-17 23:21:48.082987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.539 [2024-11-17 23:21:48.086605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.539 [2024-11-17 23:21:48.086635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:24.539 [2024-11-17 23:21:48.086644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.593 ms 00:16:24.539 [2024-11-17 23:21:48.086650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.539 [2024-11-17 23:21:48.089444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.539 [2024-11-17 23:21:48.089471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:24.539 [2024-11-17 23:21:48.089480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.765 ms 00:16:24.539 [2024-11-17 23:21:48.089485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.539 [2024-11-17 23:21:48.089720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.539 [2024-11-17 23:21:48.089729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:24.539 [2024-11-17 23:21:48.089738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:16:24.539 [2024-11-17 23:21:48.089743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.539 [2024-11-17 23:21:48.120513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.539 [2024-11-17 23:21:48.120543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:24.539 [2024-11-17 23:21:48.120557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.754 ms 00:16:24.539 [2024-11-17 23:21:48.120563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.539 [2024-11-17 23:21:48.124836] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:24.539 [2024-11-17 23:21:48.124869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:16:24.539 [2024-11-17 23:21:48.124887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.237 ms
00:16:24.539 [2024-11-17 23:21:48.124893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:24.539 [2024-11-17 23:21:48.128191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:24.539 [2024-11-17 23:21:48.128218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:16:24.539 [2024-11-17 23:21:48.128226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.266 ms
00:16:24.539 [2024-11-17 23:21:48.128231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:24.539 [2024-11-17 23:21:48.131939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:24.539 [2024-11-17 23:21:48.131966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:16:24.539 [2024-11-17 23:21:48.131976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.680 ms
00:16:24.539 [2024-11-17 23:21:48.131982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:24.539 [2024-11-17 23:21:48.132017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:24.539 [2024-11-17 23:21:48.132024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:16:24.539 [2024-11-17 23:21:48.132034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:16:24.539 [2024-11-17 23:21:48.132040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:24.539 [2024-11-17 23:21:48.132089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:24.539 [2024-11-17 23:21:48.132098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:16:24.539 [2024-11-17 23:21:48.132107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms
00:16:24.539 [2024-11-17 23:21:48.132112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:24.539 [2024-11-17 23:21:48.133062] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4022.927 ms, result 0
00:16:24.539 {
00:16:24.539   "name": "ftl0",
00:16:24.539   "uuid": "1c667c59-7557-44fa-b903-a7f282abba09"
00:16:24.539 }
00:16:24.539 23:21:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0
00:16:24.539 23:21:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name
00:16:24.539 23:21:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0
00:16:24.539 23:21:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
[2024-11-17 23:21:48.435809] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
I/O size of 69632 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 4 seconds...
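The three xtrace'd @28 commands above form a single pipeline: the harness asks the freshly created bdev for its stats over RPC and fails unless the reported name is ftl0. Condensed, with the paths exactly as in this run:

    # Sanity-check that the FTL bdev answers its stats RPC under the expected name
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 \
      | jq -r .name \
      | grep -qw ftl0

The perform_tests run that follows uses -o 69632 (68 KiB per I/O), which is why bdevperf notes that the 65536-byte zero-copy threshold is exceeded and falls back to bounce buffers.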
00:16:26.685 1232.00 IOPS, 81.81 MiB/s
[2024-11-17T23:21:51.452Z] 1026.00 IOPS, 68.13 MiB/s
[2024-11-17T23:21:52.835Z] 1041.33 IOPS, 69.15 MiB/s
[2024-11-17T23:21:52.835Z] 1012.50 IOPS, 67.24 MiB/s
00:16:29.014 Latency(us)
00:16:29.014 [2024-11-17T23:21:52.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:29.014 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:16:29.014 ftl0 : 4.00 1012.14 67.21 0.00 0.00 1041.02 164.63 26819.35
00:16:29.014 [2024-11-17T23:21:52.835Z] ===================================================================================================================
00:16:29.014 [2024-11-17T23:21:52.835Z] Total : 1012.14 67.21 0.00 0.00 1041.02 164.63 26819.35
00:16:29.014 [2024-11-17 23:21:52.443790] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
{
00:16:29.014   "results": [
00:16:29.014     {
00:16:29.014       "job": "ftl0",
00:16:29.014       "core_mask": "0x1",
00:16:29.014       "workload": "randwrite",
00:16:29.014       "status": "finished",
00:16:29.014       "queue_depth": 1,
00:16:29.014       "io_size": 69632,
00:16:29.014       "runtime": 4.002421,
00:16:29.014       "iops": 1012.1374038363281,
00:16:29.014       "mibps": 67.21224947350616,
00:16:29.014       "io_failed": 0,
00:16:29.014       "io_timeout": 0,
00:16:29.014       "avg_latency_us": 1041.022398268234,
00:16:29.014       "min_latency_us": 164.6276923076923,
00:16:29.014       "max_latency_us": 26819.347692307692
00:16:29.014     }
00:16:29.014   ],
00:16:29.014   "core_count": 1
00:16:29.014 }
00:16:29.014 23:21:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-11-17 23:21:52.550681] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
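The summary fields in this JSON are internally consistent: mibps is just iops times io_size scaled to MiB. A one-line cross-check with the values reported above:

    # 1012.1374 IOPS x 69632 bytes per I/O ~= 67.21 MiB/s (the "mibps" field)
    awk 'BEGIN { printf "%.2f MiB/s\n", 1012.1374038363281 * 69632 / 1048576 }'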
00:16:30.896 7812.00 IOPS, 30.52 MiB/s
[2024-11-17T23:21:55.660Z] 6938.00 IOPS, 27.10 MiB/s
[2024-11-17T23:21:56.601Z] 6708.33 IOPS, 26.20 MiB/s
[2024-11-17T23:21:56.601Z] 6308.25 IOPS, 24.64 MiB/s
00:16:32.780 Latency(us)
00:16:32.780 [2024-11-17T23:21:56.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:32.780 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:16:32.780 ftl0 : 4.03 6290.16 24.57 0.00 0.00 20271.75 264.66 48194.17
00:16:32.780 [2024-11-17T23:21:56.601Z] ===================================================================================================================
00:16:32.780 [2024-11-17T23:21:56.601Z] Total : 6290.16 24.57 0.00 0.00 20271.75 0.00 48194.17
00:16:32.780 [2024-11-17 23:21:56.588580] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
{
00:16:32.780   "results": [
00:16:32.780     {
00:16:32.780       "job": "ftl0",
00:16:32.780       "core_mask": "0x1",
00:16:32.780       "workload": "randwrite",
00:16:32.780       "status": "finished",
00:16:32.780       "queue_depth": 128,
00:16:32.780       "io_size": 4096,
00:16:32.780       "runtime": 4.031854,
00:16:32.780       "iops": 6290.15832418535,
00:16:32.780       "mibps": 24.57093095384902,
00:16:32.780       "io_failed": 0,
00:16:32.780       "io_timeout": 0,
00:16:32.780       "avg_latency_us": 20271.752615433146,
00:16:32.780       "min_latency_us": 264.6646153846154,
00:16:32.780       "max_latency_us": 48194.166153846156
00:16:32.780     }
00:16:32.780   ],
00:16:32.780   "core_count": 1
00:16:32.780 }
00:16:33.042 23:21:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-11-17 23:21:56.700852] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
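The jump from roughly 1 ms to roughly 20 ms average latency between the two randwrite runs is what Little's law predicts once the queue depth rises from 1 to 128: average latency is approximately queue depth divided by IOPS. A rough check with this run's numbers (the small gap against the reported avg_latency_us comes from ramp-up and the 4.03 s effective runtime):

    # Little's-law estimate vs the reported avg_latency_us of 20271.75
    awk 'BEGIN { printf "%.2f ms\n", 128 / 6290.15832418535 * 1000 }'   # ~20.35 ms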
00:16:34.931 4639.00 IOPS, 18.12 MiB/s
[2024-11-17T23:22:00.139Z] 4778.50 IOPS, 18.67 MiB/s
[2024-11-17T23:22:01.081Z] 4745.67 IOPS, 18.54 MiB/s
[2024-11-17T23:22:01.081Z] 5036.75 IOPS, 19.67 MiB/s
00:16:37.260 Latency(us)
00:16:37.260 [2024-11-17T23:22:01.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:37.260 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:37.260 Verification LBA range: start 0x0 length 0x1400000
00:16:37.260 ftl0 : 4.01 5052.33 19.74 0.00 0.00 25265.26 230.01 39926.55
00:16:37.260 [2024-11-17T23:22:01.081Z] ===================================================================================================================
00:16:37.260 [2024-11-17T23:22:01.081Z] Total : 5052.33 19.74 0.00 0.00 25265.26 0.00 39926.55
00:16:37.260 {
00:16:37.260   "results": [
00:16:37.260     {
00:16:37.260       "job": "ftl0",
00:16:37.260       "core_mask": "0x1",
00:16:37.260       "workload": "verify",
00:16:37.260       "status": "finished",
00:16:37.260       "verify_range": {
00:16:37.260         "start": 0,
00:16:37.260         "length": 20971520
00:16:37.260       },
00:16:37.260       "queue_depth": 128,
00:16:37.260       "io_size": 4096,
00:16:37.260       "runtime": 4.010625,
00:16:37.260       "iops": 5052.329749103943,
00:16:37.260       "mibps": 19.735663082437277,
00:16:37.260       "io_failed": 0,
00:16:37.260       "io_timeout": 0,
00:16:37.260       "avg_latency_us": 25265.25739753017,
00:16:37.260       "min_latency_us": 230.00615384615384,
00:16:37.260       "max_latency_us": 39926.54769230769
00:16:37.260     }
00:16:37.260   ],
00:16:37.260   "core_count": 1
00:16:37.260 }
00:16:37.260 [2024-11-17 23:22:00.728977] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:16:37.260 23:22:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-11-17 23:22:00.933365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:37.260 [2024-11-17 23:22:00.933424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:16:37.260 [2024-11-17 23:22:00.933441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:16:37.260 [2024-11-17 23:22:00.933450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:37.260 [2024-11-17 23:22:00.933476] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:37.260 [2024-11-17 23:22:00.934234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:37.260 [2024-11-17 23:22:00.934280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:16:37.260 [2024-11-17 23:22:00.934292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms
00:16:37.260 [2024-11-17 23:22:00.934309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:37.260 [2024-11-17 23:22:00.937409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:37.260 [2024-11-17 23:22:00.937457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:16:37.260 [2024-11-17 23:22:00.937468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.073 ms
00:16:37.260 [2024-11-17 23:22:00.937482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:37.523 [2024-11-17 23:22:01.167075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:37.523 [2024-11-17 23:22:01.167141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
L2P 00:16:37.523 [2024-11-17 23:22:01.167155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 229.574 ms 00:16:37.523 [2024-11-17 23:22:01.167170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.523 [2024-11-17 23:22:01.173702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.523 [2024-11-17 23:22:01.173755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:37.523 [2024-11-17 23:22:01.173768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.483 ms 00:16:37.523 [2024-11-17 23:22:01.173779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.523 [2024-11-17 23:22:01.176633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.523 [2024-11-17 23:22:01.176685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:37.523 [2024-11-17 23:22:01.176696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.778 ms 00:16:37.523 [2024-11-17 23:22:01.176709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.523 [2024-11-17 23:22:01.183237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.523 [2024-11-17 23:22:01.183308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:37.523 [2024-11-17 23:22:01.183319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.485 ms 00:16:37.523 [2024-11-17 23:22:01.183333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.523 [2024-11-17 23:22:01.183457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.523 [2024-11-17 23:22:01.183470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:37.523 [2024-11-17 23:22:01.183480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:16:37.523 [2024-11-17 23:22:01.183490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.523 [2024-11-17 23:22:01.186761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.523 [2024-11-17 23:22:01.186823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:37.523 [2024-11-17 23:22:01.186838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.248 ms 00:16:37.523 [2024-11-17 23:22:01.186853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.523 [2024-11-17 23:22:01.189926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.523 [2024-11-17 23:22:01.189978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:37.523 [2024-11-17 23:22:01.189987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.016 ms 00:16:37.523 [2024-11-17 23:22:01.189997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.523 [2024-11-17 23:22:01.192351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.523 [2024-11-17 23:22:01.192401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:37.523 [2024-11-17 23:22:01.192410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.311 ms 00:16:37.523 [2024-11-17 23:22:01.192423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.523 [2024-11-17 23:22:01.194614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.523 [2024-11-17 23:22:01.194665] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:37.523 [2024-11-17 23:22:01.194675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.129 ms 00:16:37.523 [2024-11-17 23:22:01.194684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.523 [2024-11-17 23:22:01.194723] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:37.523 [2024-11-17 23:22:01.194741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:37.523 [2024-11-17 23:22:01.194754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:37.523 [2024-11-17 23:22:01.194764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:37.523 [2024-11-17 23:22:01.194772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:37.523 [2024-11-17 23:22:01.194783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:37.523 [2024-11-17 23:22:01.194790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:37.523 [2024-11-17 23:22:01.194819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:37.523 [2024-11-17 23:22:01.194827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:37.523 [2024-11-17 23:22:01.194837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:37.523 [2024-11-17 23:22:01.194844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.194856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.194863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.194872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.194896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.194906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.194913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.194922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.194930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.194939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.194947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.194956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.194964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:37.524 [2024-11-17 23:22:01.194973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.194980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.194991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.194998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195616] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:37.524 [2024-11-17 23:22:01.195640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:37.525 [2024-11-17 23:22:01.195685] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:37.525 [2024-11-17 23:22:01.195696] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1c667c59-7557-44fa-b903-a7f282abba09 00:16:37.525 [2024-11-17 23:22:01.195708] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:37.525 [2024-11-17 23:22:01.195715] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:37.525 [2024-11-17 23:22:01.195732] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:37.525 [2024-11-17 23:22:01.195740] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:37.525 [2024-11-17 23:22:01.195752] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:37.525 [2024-11-17 23:22:01.195760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:37.525 [2024-11-17 23:22:01.195770] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:37.525 [2024-11-17 23:22:01.195776] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:37.525 [2024-11-17 23:22:01.195785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:37.525 [2024-11-17 23:22:01.195795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.525 [2024-11-17 23:22:01.195805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:37.525 [2024-11-17 23:22:01.195817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.073 ms 00:16:37.525 [2024-11-17 23:22:01.195826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.525 [2024-11-17 23:22:01.198175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.525 [2024-11-17 23:22:01.198216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:37.525 [2024-11-17 23:22:01.198233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.328 ms 00:16:37.525 [2024-11-17 23:22:01.198248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.525 [2024-11-17 23:22:01.198371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.525 [2024-11-17 23:22:01.198382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:37.525 [2024-11-17 23:22:01.198391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:16:37.525 [2024-11-17 23:22:01.198410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.525 [2024-11-17 23:22:01.206156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.525 [2024-11-17 23:22:01.206207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:37.525 [2024-11-17 23:22:01.206218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.525 [2024-11-17 23:22:01.206229] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:37.525 [2024-11-17 23:22:01.206300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.525 [2024-11-17 23:22:01.206312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:37.525 [2024-11-17 23:22:01.206320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.525 [2024-11-17 23:22:01.206336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.525 [2024-11-17 23:22:01.206414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.525 [2024-11-17 23:22:01.206427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:37.525 [2024-11-17 23:22:01.206435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.525 [2024-11-17 23:22:01.206445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.525 [2024-11-17 23:22:01.206460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.525 [2024-11-17 23:22:01.206470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:37.525 [2024-11-17 23:22:01.206478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.525 [2024-11-17 23:22:01.206491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.525 [2024-11-17 23:22:01.220914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.525 [2024-11-17 23:22:01.220969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:37.525 [2024-11-17 23:22:01.220980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.525 [2024-11-17 23:22:01.220991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.525 [2024-11-17 23:22:01.233224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.525 [2024-11-17 23:22:01.233281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:37.525 [2024-11-17 23:22:01.233293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.525 [2024-11-17 23:22:01.233314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.525 [2024-11-17 23:22:01.233388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.525 [2024-11-17 23:22:01.233401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:37.525 [2024-11-17 23:22:01.233410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.525 [2024-11-17 23:22:01.233421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.525 [2024-11-17 23:22:01.233463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.525 [2024-11-17 23:22:01.233474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:37.525 [2024-11-17 23:22:01.233483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.525 [2024-11-17 23:22:01.233496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.525 [2024-11-17 23:22:01.233582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.525 [2024-11-17 23:22:01.233599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:37.525 [2024-11-17 23:22:01.233608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms
00:16:37.525 [2024-11-17 23:22:01.233618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:37.525 [2024-11-17 23:22:01.233648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:37.525 [2024-11-17 23:22:01.233660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:16:37.525 [2024-11-17 23:22:01.233669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:37.525 [2024-11-17 23:22:01.233679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:37.525 [2024-11-17 23:22:01.233724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:37.525 [2024-11-17 23:22:01.233735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:37.525 [2024-11-17 23:22:01.233745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:37.525 [2024-11-17 23:22:01.233755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:37.525 [2024-11-17 23:22:01.233803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:37.525 [2024-11-17 23:22:01.233816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:37.525 [2024-11-17 23:22:01.233824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:37.525 [2024-11-17 23:22:01.233836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:37.525 [2024-11-17 23:22:01.234045] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 300.643 ms, result 0
00:16:37.525 true
00:16:37.525 23:22:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 84390
00:16:37.525 23:22:01 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 84390 ']'
00:16:37.525 23:22:01 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 84390
00:16:37.525 23:22:01 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname
00:16:37.525 23:22:01 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:37.525 23:22:01 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84390
00:16:37.525 killing process with pid 84390
Received shutdown signal, test time was about 4.000000 seconds
00:16:37.525
00:16:37.525 Latency(us)
00:16:37.525 [2024-11-17T23:22:01.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:37.525 [2024-11-17T23:22:01.346Z] ===================================================================================================================
00:16:37.525 [2024-11-17T23:22:01.346Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:37.525 23:22:01 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:37.525 23:22:01 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:37.525 23:22:01 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84390'
00:16:37.525 23:22:01 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 84390
00:16:37.525 23:22:01 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 84390
00:16:38.097 Remove shared memory files
00:16:38.097 23:22:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:16:38.097 23:22:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:16:38.097 23:22:01 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:16:38.097 23:22:01 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:16:38.097 23:22:01 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:16:38.097 23:22:01 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
00:16:38.097 23:22:01 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:16:38.097 23:22:01 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:16:38.097 ************************************
00:16:38.097 END TEST ftl_bdevperf
00:16:38.097 ************************************
00:16:38.097
00:16:38.097 real 0m21.709s
00:16:38.097 user 0m24.407s
00:16:38.097 sys 0m0.918s
00:16:38.097 23:22:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:38.097 23:22:01 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:38.097 23:22:01 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:16:38.097 23:22:01 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:16:38.097 23:22:01 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:38.097 23:22:01 ftl -- common/autotest_common.sh@10 -- # set +x
00:16:38.097 ************************************
00:16:38.097 START TEST ftl_trim
00:16:38.097 ************************************
00:16:38.097 23:22:01 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:16:38.097 * Looking for test storage...
00:16:38.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:16:38.097 23:22:01 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:16:38.097 23:22:01 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:16:38.097 23:22:01 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version
00:16:38.097 23:22:01 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:16:38.097 23:22:01 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:38.097 23:22:01 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:38.097 23:22:01 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:38.097 23:22:01 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-:
00:16:38.097 23:22:01 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1
00:16:38.097 23:22:01 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-:
00:16:38.097 23:22:01 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2
00:16:38.097 23:22:01 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<'
00:16:38.097 23:22:01 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2
00:16:38.097 23:22:01 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1
00:16:38.097 23:22:01 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:38.097 23:22:01 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in
00:16:38.097 23:22:01 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1
00:16:38.097 23:22:01 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:38.097 23:22:01 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:16:38.097 23:22:01 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:38.358 23:22:01 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:38.359 23:22:01 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.359 23:22:01 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:38.359 23:22:01 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.359 23:22:01 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:38.359 23:22:01 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:38.359 23:22:01 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.359 23:22:01 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:38.359 23:22:01 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.359 23:22:01 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.359 23:22:01 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.359 23:22:01 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:38.359 23:22:01 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.359 23:22:01 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:38.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.359 --rc genhtml_branch_coverage=1 00:16:38.359 --rc genhtml_function_coverage=1 00:16:38.359 --rc genhtml_legend=1 00:16:38.359 --rc geninfo_all_blocks=1 00:16:38.359 --rc geninfo_unexecuted_blocks=1 00:16:38.359 00:16:38.359 ' 00:16:38.359 23:22:01 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:38.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.359 --rc genhtml_branch_coverage=1 00:16:38.359 --rc genhtml_function_coverage=1 00:16:38.359 --rc genhtml_legend=1 00:16:38.359 --rc geninfo_all_blocks=1 00:16:38.359 --rc geninfo_unexecuted_blocks=1 00:16:38.359 00:16:38.359 ' 00:16:38.359 23:22:01 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:38.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.359 --rc genhtml_branch_coverage=1 00:16:38.359 --rc genhtml_function_coverage=1 00:16:38.359 --rc genhtml_legend=1 00:16:38.359 --rc geninfo_all_blocks=1 00:16:38.359 --rc geninfo_unexecuted_blocks=1 00:16:38.359 00:16:38.359 ' 00:16:38.359 23:22:01 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:38.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.359 --rc genhtml_branch_coverage=1 00:16:38.359 --rc genhtml_function_coverage=1 00:16:38.359 --rc genhtml_legend=1 00:16:38.359 --rc geninfo_all_blocks=1 00:16:38.359 --rc geninfo_unexecuted_blocks=1 00:16:38.359 00:16:38.359 ' 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
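The xtrace above walks scripts/common.sh resolving lt 1.15 2 as cmp_versions 1.15 '<' 2: both versions are split on the characters .-: and compared field by field. A simplified sketch of that logic (an illustrative reconstruction, not the verbatim common.sh implementation):

    cmp_versions() {
        local IFS=.-:            # split versions on dots, dashes, colons
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 i v1 v2
        for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
            v1=${ver1[i]:-0} v2=${ver2[i]:-0}   # pad missing fields with 0
            ((v1 == v2)) && continue
            case $op in
                '<') ((v1 < v2)); return $? ;;
                '>') ((v1 > v2)); return $? ;;
            esac
        done
        [[ $op == '=' ]]         # all fields equal, so strict < and > are false
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo 'lcov 1.15 predates the 2.x option names'

Here the test passes (1 < 2 in the first field), which is why the harness goes on to export the pre-2.0 lcov option spellings seen above.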
00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:38.359 23:22:01 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=84731 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 84731 00:16:38.359 23:22:01 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 84731 ']' 00:16:38.359 23:22:01 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.359 23:22:01 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:38.359 23:22:01 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.359 23:22:01 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:38.359 23:22:01 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:38.359 23:22:01 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:38.359 [2024-11-17 23:22:02.036920] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:16:38.359 [2024-11-17 23:22:02.037087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84731 ] 00:16:38.621 [2024-11-17 23:22:02.184553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:38.621 [2024-11-17 23:22:02.216742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.621 [2024-11-17 23:22:02.217104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:38.621 [2024-11-17 23:22:02.217116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.194 23:22:02 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:39.194 23:22:02 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:16:39.194 23:22:02 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:39.194 23:22:02 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:39.194 23:22:02 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:39.194 23:22:02 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:39.194 23:22:02 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:39.194 23:22:02 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:39.453 23:22:03 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:39.453 23:22:03 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:39.453 23:22:03 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:39.453 23:22:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:16:39.453 23:22:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:39.453 23:22:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:39.453 23:22:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:39.453 23:22:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:39.714 23:22:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:39.714 { 00:16:39.714 "name": "nvme0n1", 00:16:39.714 "aliases": [ 
00:16:39.714 "6a477357-3028-4722-a161-b5829fc3729a" 00:16:39.714 ], 00:16:39.714 "product_name": "NVMe disk", 00:16:39.714 "block_size": 4096, 00:16:39.714 "num_blocks": 1310720, 00:16:39.714 "uuid": "6a477357-3028-4722-a161-b5829fc3729a", 00:16:39.714 "numa_id": -1, 00:16:39.714 "assigned_rate_limits": { 00:16:39.714 "rw_ios_per_sec": 0, 00:16:39.714 "rw_mbytes_per_sec": 0, 00:16:39.714 "r_mbytes_per_sec": 0, 00:16:39.714 "w_mbytes_per_sec": 0 00:16:39.714 }, 00:16:39.714 "claimed": true, 00:16:39.714 "claim_type": "read_many_write_one", 00:16:39.714 "zoned": false, 00:16:39.714 "supported_io_types": { 00:16:39.714 "read": true, 00:16:39.714 "write": true, 00:16:39.714 "unmap": true, 00:16:39.714 "flush": true, 00:16:39.714 "reset": true, 00:16:39.714 "nvme_admin": true, 00:16:39.714 "nvme_io": true, 00:16:39.714 "nvme_io_md": false, 00:16:39.714 "write_zeroes": true, 00:16:39.714 "zcopy": false, 00:16:39.714 "get_zone_info": false, 00:16:39.714 "zone_management": false, 00:16:39.714 "zone_append": false, 00:16:39.714 "compare": true, 00:16:39.714 "compare_and_write": false, 00:16:39.714 "abort": true, 00:16:39.714 "seek_hole": false, 00:16:39.714 "seek_data": false, 00:16:39.714 "copy": true, 00:16:39.714 "nvme_iov_md": false 00:16:39.714 }, 00:16:39.714 "driver_specific": { 00:16:39.714 "nvme": [ 00:16:39.714 { 00:16:39.714 "pci_address": "0000:00:11.0", 00:16:39.714 "trid": { 00:16:39.714 "trtype": "PCIe", 00:16:39.714 "traddr": "0000:00:11.0" 00:16:39.714 }, 00:16:39.714 "ctrlr_data": { 00:16:39.714 "cntlid": 0, 00:16:39.714 "vendor_id": "0x1b36", 00:16:39.714 "model_number": "QEMU NVMe Ctrl", 00:16:39.714 "serial_number": "12341", 00:16:39.714 "firmware_revision": "8.0.0", 00:16:39.714 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:39.714 "oacs": { 00:16:39.714 "security": 0, 00:16:39.714 "format": 1, 00:16:39.714 "firmware": 0, 00:16:39.714 "ns_manage": 1 00:16:39.714 }, 00:16:39.714 "multi_ctrlr": false, 00:16:39.714 "ana_reporting": false 00:16:39.714 }, 00:16:39.714 "vs": { 00:16:39.714 "nvme_version": "1.4" 00:16:39.714 }, 00:16:39.714 "ns_data": { 00:16:39.714 "id": 1, 00:16:39.714 "can_share": false 00:16:39.714 } 00:16:39.714 } 00:16:39.714 ], 00:16:39.714 "mp_policy": "active_passive" 00:16:39.714 } 00:16:39.714 } 00:16:39.714 ]' 00:16:39.714 23:22:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:39.714 23:22:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:39.714 23:22:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:39.714 23:22:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:16:39.714 23:22:03 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:16:39.714 23:22:03 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:16:39.714 23:22:03 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:39.714 23:22:03 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:39.714 23:22:03 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:39.714 23:22:03 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:39.714 23:22:03 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:40.007 23:22:03 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=becfe7ae-61a4-4a4f-bd76-0003737864aa 00:16:40.007 23:22:03 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:40.007 23:22:03 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u becfe7ae-61a4-4a4f-bd76-0003737864aa 00:16:40.271 23:22:03 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:40.533 23:22:04 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=7b04df80-bda7-404d-9b51-ea9e73ca0423 00:16:40.533 23:22:04 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7b04df80-bda7-404d-9b51-ea9e73ca0423 00:16:40.799 23:22:04 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=3efe3388-79c9-4162-bef5-729e52fbc020 00:16:40.799 23:22:04 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3efe3388-79c9-4162-bef5-729e52fbc020 00:16:40.799 23:22:04 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:40.799 23:22:04 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:40.799 23:22:04 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=3efe3388-79c9-4162-bef5-729e52fbc020 00:16:40.799 23:22:04 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:40.799 23:22:04 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 3efe3388-79c9-4162-bef5-729e52fbc020 00:16:40.799 23:22:04 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=3efe3388-79c9-4162-bef5-729e52fbc020 00:16:40.799 23:22:04 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:40.799 23:22:04 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:40.799 23:22:04 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:40.799 23:22:04 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3efe3388-79c9-4162-bef5-729e52fbc020 00:16:41.064 23:22:04 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:41.064 { 00:16:41.064 "name": "3efe3388-79c9-4162-bef5-729e52fbc020", 00:16:41.064 "aliases": [ 00:16:41.064 "lvs/nvme0n1p0" 00:16:41.064 ], 00:16:41.064 "product_name": "Logical Volume", 00:16:41.064 "block_size": 4096, 00:16:41.064 "num_blocks": 26476544, 00:16:41.064 "uuid": "3efe3388-79c9-4162-bef5-729e52fbc020", 00:16:41.064 "assigned_rate_limits": { 00:16:41.064 "rw_ios_per_sec": 0, 00:16:41.064 "rw_mbytes_per_sec": 0, 00:16:41.064 "r_mbytes_per_sec": 0, 00:16:41.064 "w_mbytes_per_sec": 0 00:16:41.064 }, 00:16:41.064 "claimed": false, 00:16:41.064 "zoned": false, 00:16:41.064 "supported_io_types": { 00:16:41.064 "read": true, 00:16:41.064 "write": true, 00:16:41.064 "unmap": true, 00:16:41.064 "flush": false, 00:16:41.064 "reset": true, 00:16:41.064 "nvme_admin": false, 00:16:41.064 "nvme_io": false, 00:16:41.064 "nvme_io_md": false, 00:16:41.064 "write_zeroes": true, 00:16:41.064 "zcopy": false, 00:16:41.064 "get_zone_info": false, 00:16:41.064 "zone_management": false, 00:16:41.064 "zone_append": false, 00:16:41.064 "compare": false, 00:16:41.064 "compare_and_write": false, 00:16:41.064 "abort": false, 00:16:41.064 "seek_hole": true, 00:16:41.064 "seek_data": true, 00:16:41.064 "copy": false, 00:16:41.064 "nvme_iov_md": false 00:16:41.064 }, 00:16:41.064 "driver_specific": { 00:16:41.064 "lvol": { 00:16:41.064 "lvol_store_uuid": "7b04df80-bda7-404d-9b51-ea9e73ca0423", 00:16:41.064 "base_bdev": "nvme0n1", 00:16:41.064 "thin_provision": true, 00:16:41.064 "num_allocated_clusters": 0, 00:16:41.064 "snapshot": false, 00:16:41.064 "clone": false, 00:16:41.064 "esnap_clone": false 00:16:41.064 } 00:16:41.064 } 00:16:41.065 } 00:16:41.065 ]' 00:16:41.065 23:22:04 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:41.065 23:22:04 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:41.065 23:22:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:41.065 23:22:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:41.065 23:22:04 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:41.065 23:22:04 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:41.065 23:22:04 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:41.065 23:22:04 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:41.065 23:22:04 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:41.326 23:22:04 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:41.326 23:22:04 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:41.326 23:22:04 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 3efe3388-79c9-4162-bef5-729e52fbc020 00:16:41.326 23:22:04 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=3efe3388-79c9-4162-bef5-729e52fbc020 00:16:41.326 23:22:04 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:41.326 23:22:04 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:41.326 23:22:04 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:41.326 23:22:04 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3efe3388-79c9-4162-bef5-729e52fbc020 00:16:41.587 23:22:05 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:41.588 { 00:16:41.588 "name": "3efe3388-79c9-4162-bef5-729e52fbc020", 00:16:41.588 "aliases": [ 00:16:41.588 "lvs/nvme0n1p0" 00:16:41.588 ], 00:16:41.588 "product_name": "Logical Volume", 00:16:41.588 "block_size": 4096, 00:16:41.588 "num_blocks": 26476544, 00:16:41.588 "uuid": "3efe3388-79c9-4162-bef5-729e52fbc020", 00:16:41.588 "assigned_rate_limits": { 00:16:41.588 "rw_ios_per_sec": 0, 00:16:41.588 "rw_mbytes_per_sec": 0, 00:16:41.588 "r_mbytes_per_sec": 0, 00:16:41.588 "w_mbytes_per_sec": 0 00:16:41.588 }, 00:16:41.588 "claimed": false, 00:16:41.588 "zoned": false, 00:16:41.588 "supported_io_types": { 00:16:41.588 "read": true, 00:16:41.588 "write": true, 00:16:41.588 "unmap": true, 00:16:41.588 "flush": false, 00:16:41.588 "reset": true, 00:16:41.588 "nvme_admin": false, 00:16:41.588 "nvme_io": false, 00:16:41.588 "nvme_io_md": false, 00:16:41.588 "write_zeroes": true, 00:16:41.588 "zcopy": false, 00:16:41.588 "get_zone_info": false, 00:16:41.588 "zone_management": false, 00:16:41.588 "zone_append": false, 00:16:41.588 "compare": false, 00:16:41.588 "compare_and_write": false, 00:16:41.588 "abort": false, 00:16:41.588 "seek_hole": true, 00:16:41.588 "seek_data": true, 00:16:41.588 "copy": false, 00:16:41.588 "nvme_iov_md": false 00:16:41.588 }, 00:16:41.588 "driver_specific": { 00:16:41.588 "lvol": { 00:16:41.588 "lvol_store_uuid": "7b04df80-bda7-404d-9b51-ea9e73ca0423", 00:16:41.588 "base_bdev": "nvme0n1", 00:16:41.588 "thin_provision": true, 00:16:41.588 "num_allocated_clusters": 0, 00:16:41.588 "snapshot": false, 00:16:41.588 "clone": false, 00:16:41.588 "esnap_clone": false 00:16:41.588 } 00:16:41.588 } 00:16:41.588 } 00:16:41.588 ]' 00:16:41.588 23:22:05 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:41.588 23:22:05 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:16:41.588 23:22:05 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:41.588 23:22:05 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:41.588 23:22:05 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:41.588 23:22:05 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:41.588 23:22:05 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:41.588 23:22:05 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:41.849 23:22:05 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:41.849 23:22:05 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:41.849 23:22:05 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 3efe3388-79c9-4162-bef5-729e52fbc020 00:16:41.849 23:22:05 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=3efe3388-79c9-4162-bef5-729e52fbc020 00:16:41.849 23:22:05 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:41.849 23:22:05 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:41.849 23:22:05 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:41.849 23:22:05 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3efe3388-79c9-4162-bef5-729e52fbc020 00:16:42.110 23:22:05 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:42.111 { 00:16:42.111 "name": "3efe3388-79c9-4162-bef5-729e52fbc020", 00:16:42.111 "aliases": [ 00:16:42.111 "lvs/nvme0n1p0" 00:16:42.111 ], 00:16:42.111 "product_name": "Logical Volume", 00:16:42.111 "block_size": 4096, 00:16:42.111 "num_blocks": 26476544, 00:16:42.111 "uuid": "3efe3388-79c9-4162-bef5-729e52fbc020", 00:16:42.111 "assigned_rate_limits": { 00:16:42.111 "rw_ios_per_sec": 0, 00:16:42.111 "rw_mbytes_per_sec": 0, 00:16:42.111 "r_mbytes_per_sec": 0, 00:16:42.111 "w_mbytes_per_sec": 0 00:16:42.111 }, 00:16:42.111 "claimed": false, 00:16:42.111 "zoned": false, 00:16:42.111 "supported_io_types": { 00:16:42.111 "read": true, 00:16:42.111 "write": true, 00:16:42.111 "unmap": true, 00:16:42.111 "flush": false, 00:16:42.111 "reset": true, 00:16:42.111 "nvme_admin": false, 00:16:42.111 "nvme_io": false, 00:16:42.111 "nvme_io_md": false, 00:16:42.111 "write_zeroes": true, 00:16:42.111 "zcopy": false, 00:16:42.111 "get_zone_info": false, 00:16:42.111 "zone_management": false, 00:16:42.111 "zone_append": false, 00:16:42.111 "compare": false, 00:16:42.111 "compare_and_write": false, 00:16:42.111 "abort": false, 00:16:42.111 "seek_hole": true, 00:16:42.111 "seek_data": true, 00:16:42.111 "copy": false, 00:16:42.111 "nvme_iov_md": false 00:16:42.111 }, 00:16:42.111 "driver_specific": { 00:16:42.111 "lvol": { 00:16:42.111 "lvol_store_uuid": "7b04df80-bda7-404d-9b51-ea9e73ca0423", 00:16:42.111 "base_bdev": "nvme0n1", 00:16:42.111 "thin_provision": true, 00:16:42.111 "num_allocated_clusters": 0, 00:16:42.111 "snapshot": false, 00:16:42.111 "clone": false, 00:16:42.111 "esnap_clone": false 00:16:42.111 } 00:16:42.111 } 00:16:42.111 } 00:16:42.111 ]' 00:16:42.111 23:22:05 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:42.111 23:22:05 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:42.111 23:22:05 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:42.111 23:22:05 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:16:42.111 23:22:05 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:42.111 23:22:05 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:42.111 23:22:05 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:42.111 23:22:05 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3efe3388-79c9-4162-bef5-729e52fbc020 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:42.384 [2024-11-17 23:22:05.945955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.384 [2024-11-17 23:22:05.946004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:42.384 [2024-11-17 23:22:05.946019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:42.384 [2024-11-17 23:22:05.946030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.384 [2024-11-17 23:22:05.948590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.384 [2024-11-17 23:22:05.948629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:42.384 [2024-11-17 23:22:05.948640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.528 ms 00:16:42.384 [2024-11-17 23:22:05.948650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.385 [2024-11-17 23:22:05.948753] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:42.385 [2024-11-17 23:22:05.949044] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:42.385 [2024-11-17 23:22:05.949060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.385 [2024-11-17 23:22:05.949071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:42.385 [2024-11-17 23:22:05.949081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:16:42.385 [2024-11-17 23:22:05.949092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.385 [2024-11-17 23:22:05.949209] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID af926b10-56ee-4fe0-b2b0-4ca61c941c28 00:16:42.385 [2024-11-17 23:22:05.950575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.385 [2024-11-17 23:22:05.950609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:42.385 [2024-11-17 23:22:05.950621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:42.385 [2024-11-17 23:22:05.950629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.385 [2024-11-17 23:22:05.957850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.385 [2024-11-17 23:22:05.957896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:42.385 [2024-11-17 23:22:05.957908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.115 ms 00:16:42.385 [2024-11-17 23:22:05.957915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.385 [2024-11-17 23:22:05.958036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.385 [2024-11-17 23:22:05.958048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:42.385 [2024-11-17 23:22:05.958058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.063 ms 00:16:42.385 [2024-11-17 23:22:05.958089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.385 [2024-11-17 23:22:05.958128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.385 [2024-11-17 23:22:05.958137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:42.386 [2024-11-17 23:22:05.958148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:42.386 [2024-11-17 23:22:05.958156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.386 [2024-11-17 23:22:05.958193] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:42.386 [2024-11-17 23:22:05.959983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.386 [2024-11-17 23:22:05.960014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:42.386 [2024-11-17 23:22:05.960035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.796 ms 00:16:42.386 [2024-11-17 23:22:05.960047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.386 [2024-11-17 23:22:05.960116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.386 [2024-11-17 23:22:05.960128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:42.386 [2024-11-17 23:22:05.960137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:42.386 [2024-11-17 23:22:05.960148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.386 [2024-11-17 23:22:05.960179] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:42.386 [2024-11-17 23:22:05.960329] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:42.386 [2024-11-17 23:22:05.960355] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:42.386 [2024-11-17 23:22:05.960369] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:42.386 [2024-11-17 23:22:05.960379] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:42.386 [2024-11-17 23:22:05.960392] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:42.386 [2024-11-17 23:22:05.960410] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:42.386 [2024-11-17 23:22:05.960420] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:42.386 [2024-11-17 23:22:05.960427] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:42.386 [2024-11-17 23:22:05.960436] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:42.386 [2024-11-17 23:22:05.960446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.386 [2024-11-17 23:22:05.960455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:42.386 [2024-11-17 23:22:05.960463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:16:42.386 [2024-11-17 23:22:05.960473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.387 [2024-11-17 23:22:05.960578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.387 
[2024-11-17 23:22:05.960596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:42.387 [2024-11-17 23:22:05.960604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:42.387 [2024-11-17 23:22:05.960613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.387 [2024-11-17 23:22:05.960743] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:42.387 [2024-11-17 23:22:05.960757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:42.387 [2024-11-17 23:22:05.960767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:42.387 [2024-11-17 23:22:05.960777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.388 [2024-11-17 23:22:05.960787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:42.388 [2024-11-17 23:22:05.960796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:42.388 [2024-11-17 23:22:05.960804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:42.388 [2024-11-17 23:22:05.960813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:42.388 [2024-11-17 23:22:05.960821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:42.388 [2024-11-17 23:22:05.960833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:42.388 [2024-11-17 23:22:05.960840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:42.388 [2024-11-17 23:22:05.960850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:42.388 [2024-11-17 23:22:05.960858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:42.388 [2024-11-17 23:22:05.960869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:42.388 [2024-11-17 23:22:05.960896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:42.388 [2024-11-17 23:22:05.960906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.388 [2024-11-17 23:22:05.960915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:42.388 [2024-11-17 23:22:05.960925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:42.388 [2024-11-17 23:22:05.960933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.388 [2024-11-17 23:22:05.960942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:42.388 [2024-11-17 23:22:05.960951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:42.388 [2024-11-17 23:22:05.960960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.393 [2024-11-17 23:22:05.960968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:42.393 [2024-11-17 23:22:05.960992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:42.393 [2024-11-17 23:22:05.960999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.393 [2024-11-17 23:22:05.961009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:42.393 [2024-11-17 23:22:05.961016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:42.393 [2024-11-17 23:22:05.961025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.393 [2024-11-17 23:22:05.961032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:42.393 [2024-11-17 23:22:05.961046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:42.393 [2024-11-17 23:22:05.961053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.393 [2024-11-17 23:22:05.961061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:42.393 [2024-11-17 23:22:05.961068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:42.393 [2024-11-17 23:22:05.961079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:42.393 [2024-11-17 23:22:05.961087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:42.393 [2024-11-17 23:22:05.961095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:42.393 [2024-11-17 23:22:05.961102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:42.393 [2024-11-17 23:22:05.961111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:42.393 [2024-11-17 23:22:05.961118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:42.393 [2024-11-17 23:22:05.961126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.393 [2024-11-17 23:22:05.961133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:42.393 [2024-11-17 23:22:05.961141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:42.393 [2024-11-17 23:22:05.961147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.393 [2024-11-17 23:22:05.961156] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:42.394 [2024-11-17 23:22:05.961173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:42.394 [2024-11-17 23:22:05.961184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:42.394 [2024-11-17 23:22:05.961191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.394 [2024-11-17 23:22:05.961200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:42.394 [2024-11-17 23:22:05.961207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:42.394 [2024-11-17 23:22:05.961216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:42.394 [2024-11-17 23:22:05.961223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:42.394 [2024-11-17 23:22:05.961231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:42.394 [2024-11-17 23:22:05.961238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:42.394 [2024-11-17 23:22:05.961250] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:42.394 [2024-11-17 23:22:05.961259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:42.395 [2024-11-17 23:22:05.961269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:42.395 [2024-11-17 23:22:05.961277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:42.395 [2024-11-17 23:22:05.961287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:42.395 [2024-11-17 23:22:05.961294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:42.395 [2024-11-17 23:22:05.961303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:42.395 [2024-11-17 23:22:05.961310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:42.395 [2024-11-17 23:22:05.961322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:42.395 [2024-11-17 23:22:05.961330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:42.395 [2024-11-17 23:22:05.961339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:42.395 [2024-11-17 23:22:05.961347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:42.395 [2024-11-17 23:22:05.961355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:42.395 [2024-11-17 23:22:05.961363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:42.395 [2024-11-17 23:22:05.961371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:42.395 [2024-11-17 23:22:05.961379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:42.395 [2024-11-17 23:22:05.961388] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:42.395 [2024-11-17 23:22:05.961396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:42.395 [2024-11-17 23:22:05.961408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:42.395 [2024-11-17 23:22:05.961416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:42.395 [2024-11-17 23:22:05.961425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:42.395 [2024-11-17 23:22:05.961432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:42.395 [2024-11-17 23:22:05.961442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.395 [2024-11-17 23:22:05.961449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:42.395 [2024-11-17 23:22:05.961460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:16:42.395 [2024-11-17 23:22:05.961467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.395 [2024-11-17 23:22:05.961560] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:42.395 [2024-11-17 23:22:05.961569] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:44.943 [2024-11-17 23:22:08.594136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.944 [2024-11-17 23:22:08.594367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:44.944 [2024-11-17 23:22:08.594496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2632.562 ms 00:16:44.944 [2024-11-17 23:22:08.594526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.944 [2024-11-17 23:22:08.605309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.944 [2024-11-17 23:22:08.605466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:44.944 [2024-11-17 23:22:08.605528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.650 ms 00:16:44.944 [2024-11-17 23:22:08.605551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.944 [2024-11-17 23:22:08.605705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.944 [2024-11-17 23:22:08.605871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:44.944 [2024-11-17 23:22:08.605911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:44.944 [2024-11-17 23:22:08.605934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.944 [2024-11-17 23:22:08.627056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.944 [2024-11-17 23:22:08.627254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:44.944 [2024-11-17 23:22:08.627472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.068 ms 00:16:44.944 [2024-11-17 23:22:08.627518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.944 [2024-11-17 23:22:08.627677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.944 [2024-11-17 23:22:08.627847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:44.944 [2024-11-17 23:22:08.627870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:44.944 [2024-11-17 23:22:08.627906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.944 [2024-11-17 23:22:08.628367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.944 [2024-11-17 23:22:08.628398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:44.944 [2024-11-17 23:22:08.628415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:16:44.944 [2024-11-17 23:22:08.628429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.944 [2024-11-17 23:22:08.628618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.944 [2024-11-17 23:22:08.628655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:44.944 [2024-11-17 23:22:08.628683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:16:44.944 [2024-11-17 23:22:08.628699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.944 [2024-11-17 23:22:08.636617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.944 [2024-11-17 23:22:08.636648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:44.944 [2024-11-17 23:22:08.636660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.871 ms 00:16:44.944 [2024-11-17 23:22:08.636668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.944 [2024-11-17 23:22:08.645727] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:44.944 [2024-11-17 23:22:08.662749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.944 [2024-11-17 23:22:08.662783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:44.944 [2024-11-17 23:22:08.662794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.952 ms 00:16:44.944 [2024-11-17 23:22:08.662803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.944 [2024-11-17 23:22:08.725216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.944 [2024-11-17 23:22:08.725256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:44.944 [2024-11-17 23:22:08.725267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.318 ms 00:16:44.944 [2024-11-17 23:22:08.725279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.944 [2024-11-17 23:22:08.725479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.944 [2024-11-17 23:22:08.725493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:44.944 [2024-11-17 23:22:08.725502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:16:44.944 [2024-11-17 23:22:08.725511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.944 [2024-11-17 23:22:08.728593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.944 [2024-11-17 23:22:08.728628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:44.944 [2024-11-17 23:22:08.728637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.056 ms 00:16:44.944 [2024-11-17 23:22:08.728647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.944 [2024-11-17 23:22:08.731287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.944 [2024-11-17 23:22:08.731319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:44.944 [2024-11-17 23:22:08.731328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.591 ms 00:16:44.944 [2024-11-17 23:22:08.731337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.944 [2024-11-17 23:22:08.731669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.944 [2024-11-17 23:22:08.731687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:44.944 [2024-11-17 23:22:08.731697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:16:44.944 [2024-11-17 23:22:08.731709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.205 [2024-11-17 23:22:08.763606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.205 [2024-11-17 23:22:08.763711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:45.205 [2024-11-17 23:22:08.763759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.842 ms 00:16:45.205 [2024-11-17 23:22:08.763782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
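Note: the startup trace above and below is the server-side effect of the single bdev_ftl_create RPC issued earlier; the 240-second timeout passed with -t gives the NV-cache scrub of 5 chunks (about 2.6 s in this run) ample headroom. The L2P numbers are self-consistent: 23592960 entries of 4 bytes each make the 90.00 MiB mapping table shown in the layout dump, and --l2p_dram_limit 60 caps its resident portion at 59 of 60 MiB, as the ftl_l2p_cache notice above reports. A condensed sketch of the bdev stack this test assembles follows; commands, PCI addresses, and sizes are copied from the run above, while RPC, LVS_UUID, and LVOL_UUID are illustrative stand-ins for the paths and UUIDs printed in the log:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Base device: the QEMU NVMe controller at 0000:00:11.0 attaches as nvme0n1
  # (1310720 blocks x 4096 B = 5120 MiB).
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  LVS_UUID=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)
  # Thin provisioning (-t) lets the 103424 MiB logical size exceed the 5120 MiB base.
  LVOL_UUID=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$LVS_UUID")

  # NV cache: second controller at 0000:00:10.0; the first 5171 MiB are split
  # off as nvc0n1p0 to serve as the write buffer cache.
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $RPC bdev_split_create nvc0n1 -s 5171 1

  # FTL bdev over the lvol (data) and the split partition (NV cache); startup
  # performs the chunk scrub and metadata initialization traced in this log.
  $RPC -t 240 bdev_ftl_create -b ftl0 -d "$LVOL_UUID" -c nvc0n1p0 \
    --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

Immediately below, trim.sh waits for ftl0 to appear (waitforbdev), saves the bdev subsystem configuration, and snapshots the bdev JSON via bdev_get_bdevs before the trim workload itself runs.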
00:16:45.205 [2024-11-17 23:22:08.771167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.205 [2024-11-17 23:22:08.771240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:45.205 [2024-11-17 23:22:08.771280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.154 ms 00:16:45.205 [2024-11-17 23:22:08.771317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.205 [2024-11-17 23:22:08.775062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.205 [2024-11-17 23:22:08.775095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:45.205 [2024-11-17 23:22:08.775104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.681 ms 00:16:45.205 [2024-11-17 23:22:08.775114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.205 [2024-11-17 23:22:08.778558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.205 [2024-11-17 23:22:08.778714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:45.205 [2024-11-17 23:22:08.778728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.406 ms 00:16:45.205 [2024-11-17 23:22:08.778740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.205 [2024-11-17 23:22:08.778778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.205 [2024-11-17 23:22:08.778791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:45.205 [2024-11-17 23:22:08.778811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:45.205 [2024-11-17 23:22:08.778820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.205 [2024-11-17 23:22:08.778923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.205 [2024-11-17 23:22:08.778935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:45.205 [2024-11-17 23:22:08.778943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:45.205 [2024-11-17 23:22:08.778953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.205 [2024-11-17 23:22:08.779952] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:45.205 [2024-11-17 23:22:08.780926] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2833.715 ms, result 0 00:16:45.205 [2024-11-17 23:22:08.781609] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:45.205 { 00:16:45.205 "name": "ftl0", 00:16:45.205 "uuid": "af926b10-56ee-4fe0-b2b0-4ca61c941c28" 00:16:45.205 } 00:16:45.205 23:22:08 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:45.205 23:22:08 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:16:45.205 23:22:08 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:45.205 23:22:08 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:16:45.205 23:22:08 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:45.205 23:22:08 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:45.205 23:22:08 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:45.205 23:22:09 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:45.482 [ 00:16:45.482 { 00:16:45.482 "name": "ftl0", 00:16:45.482 "aliases": [ 00:16:45.482 "af926b10-56ee-4fe0-b2b0-4ca61c941c28" 00:16:45.482 ], 00:16:45.482 "product_name": "FTL disk", 00:16:45.482 "block_size": 4096, 00:16:45.482 "num_blocks": 23592960, 00:16:45.482 "uuid": "af926b10-56ee-4fe0-b2b0-4ca61c941c28", 00:16:45.482 "assigned_rate_limits": { 00:16:45.482 "rw_ios_per_sec": 0, 00:16:45.482 "rw_mbytes_per_sec": 0, 00:16:45.482 "r_mbytes_per_sec": 0, 00:16:45.482 "w_mbytes_per_sec": 0 00:16:45.482 }, 00:16:45.482 "claimed": false, 00:16:45.482 "zoned": false, 00:16:45.482 "supported_io_types": { 00:16:45.482 "read": true, 00:16:45.482 "write": true, 00:16:45.482 "unmap": true, 00:16:45.482 "flush": true, 00:16:45.482 "reset": false, 00:16:45.482 "nvme_admin": false, 00:16:45.482 "nvme_io": false, 00:16:45.482 "nvme_io_md": false, 00:16:45.482 "write_zeroes": true, 00:16:45.482 "zcopy": false, 00:16:45.482 "get_zone_info": false, 00:16:45.482 "zone_management": false, 00:16:45.482 "zone_append": false, 00:16:45.482 "compare": false, 00:16:45.482 "compare_and_write": false, 00:16:45.482 "abort": false, 00:16:45.482 "seek_hole": false, 00:16:45.482 "seek_data": false, 00:16:45.482 "copy": false, 00:16:45.482 "nvme_iov_md": false 00:16:45.482 }, 00:16:45.482 "driver_specific": { 00:16:45.482 "ftl": { 00:16:45.482 "base_bdev": "3efe3388-79c9-4162-bef5-729e52fbc020", 00:16:45.482 "cache": "nvc0n1p0" 00:16:45.482 } 00:16:45.482 } 00:16:45.482 } 00:16:45.482 ] 00:16:45.482 23:22:09 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:16:45.482 23:22:09 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:45.482 23:22:09 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:45.750 23:22:09 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:45.750 23:22:09 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:46.007 23:22:09 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:46.007 { 00:16:46.007 "name": "ftl0", 00:16:46.007 "aliases": [ 00:16:46.007 "af926b10-56ee-4fe0-b2b0-4ca61c941c28" 00:16:46.007 ], 00:16:46.007 "product_name": "FTL disk", 00:16:46.007 "block_size": 4096, 00:16:46.007 "num_blocks": 23592960, 00:16:46.007 "uuid": "af926b10-56ee-4fe0-b2b0-4ca61c941c28", 00:16:46.007 "assigned_rate_limits": { 00:16:46.007 "rw_ios_per_sec": 0, 00:16:46.007 "rw_mbytes_per_sec": 0, 00:16:46.007 "r_mbytes_per_sec": 0, 00:16:46.007 "w_mbytes_per_sec": 0 00:16:46.007 }, 00:16:46.007 "claimed": false, 00:16:46.007 "zoned": false, 00:16:46.007 "supported_io_types": { 00:16:46.007 "read": true, 00:16:46.007 "write": true, 00:16:46.007 "unmap": true, 00:16:46.007 "flush": true, 00:16:46.007 "reset": false, 00:16:46.007 "nvme_admin": false, 00:16:46.007 "nvme_io": false, 00:16:46.007 "nvme_io_md": false, 00:16:46.007 "write_zeroes": true, 00:16:46.007 "zcopy": false, 00:16:46.007 "get_zone_info": false, 00:16:46.007 "zone_management": false, 00:16:46.007 "zone_append": false, 00:16:46.007 "compare": false, 00:16:46.007 "compare_and_write": false, 00:16:46.007 "abort": false, 00:16:46.007 "seek_hole": false, 00:16:46.007 "seek_data": false, 00:16:46.007 "copy": false, 00:16:46.007 "nvme_iov_md": false 00:16:46.007 }, 00:16:46.007 "driver_specific": { 00:16:46.007 "ftl": { 00:16:46.007 "base_bdev": "3efe3388-79c9-4162-bef5-729e52fbc020", 
00:16:46.007 "cache": "nvc0n1p0" 00:16:46.007 } 00:16:46.007 } 00:16:46.007 } 00:16:46.007 ]' 00:16:46.007 23:22:09 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:46.007 23:22:09 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:46.007 23:22:09 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:46.267 [2024-11-17 23:22:09.833000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.267 [2024-11-17 23:22:09.833044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:46.267 [2024-11-17 23:22:09.833060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:46.267 [2024-11-17 23:22:09.833068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.267 [2024-11-17 23:22:09.833109] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:46.267 [2024-11-17 23:22:09.833662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.267 [2024-11-17 23:22:09.833682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:46.267 [2024-11-17 23:22:09.833691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:16:46.267 [2024-11-17 23:22:09.833701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.267 [2024-11-17 23:22:09.834317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.267 [2024-11-17 23:22:09.834389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:46.267 [2024-11-17 23:22:09.834399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:16:46.267 [2024-11-17 23:22:09.834409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.267 [2024-11-17 23:22:09.838066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.267 [2024-11-17 23:22:09.838090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:46.267 [2024-11-17 23:22:09.838100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.632 ms 00:16:46.267 [2024-11-17 23:22:09.838110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.267 [2024-11-17 23:22:09.845257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.267 [2024-11-17 23:22:09.845290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:46.267 [2024-11-17 23:22:09.845301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.070 ms 00:16:46.267 [2024-11-17 23:22:09.845314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.267 [2024-11-17 23:22:09.846938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.267 [2024-11-17 23:22:09.846974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:46.267 [2024-11-17 23:22:09.846983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.525 ms 00:16:46.267 [2024-11-17 23:22:09.846992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.267 [2024-11-17 23:22:09.851704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.267 [2024-11-17 23:22:09.851741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:46.267 [2024-11-17 23:22:09.851751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.664 ms 00:16:46.267 [2024-11-17 23:22:09.851761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.267 [2024-11-17 23:22:09.851962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.267 [2024-11-17 23:22:09.851976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:46.267 [2024-11-17 23:22:09.851985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:16:46.267 [2024-11-17 23:22:09.851994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.267 [2024-11-17 23:22:09.853790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.267 [2024-11-17 23:22:09.853823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:46.267 [2024-11-17 23:22:09.853832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.753 ms 00:16:46.267 [2024-11-17 23:22:09.853843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.267 [2024-11-17 23:22:09.855307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.267 [2024-11-17 23:22:09.855338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:46.267 [2024-11-17 23:22:09.855347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.407 ms 00:16:46.267 [2024-11-17 23:22:09.855357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.267 [2024-11-17 23:22:09.856436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.267 [2024-11-17 23:22:09.856575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:46.267 [2024-11-17 23:22:09.856590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.035 ms 00:16:46.267 [2024-11-17 23:22:09.856600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.267 [2024-11-17 23:22:09.857732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.267 [2024-11-17 23:22:09.857763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:46.267 [2024-11-17 23:22:09.857771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:16:46.267 [2024-11-17 23:22:09.857781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.267 [2024-11-17 23:22:09.857822] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:46.267 [2024-11-17 23:22:09.857838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:46.267 [2024-11-17 23:22:09.857849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:46.267 [2024-11-17 23:22:09.857861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:46.267 [2024-11-17 23:22:09.857868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:46.267 [2024-11-17 23:22:09.857891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:46.267 [2024-11-17 23:22:09.857900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:46.267 [2024-11-17 23:22:09.857910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:46.268 [2024-11-17 23:22:09.857917] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8-100: 0 / 261120 wr_cnt: 0 state: free (identical values reported for each of bands 8 through 100)
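The band dump above and the statistics that follow are consistent with a just-created device that has seen no user I/O: all 100 bands (261120 blocks each) are still free, and every one of the 960 writes counted below is an internal metadata write against 0 user writes. WAF is the write-amplification factor, total writes divided by user writes, hence the reported inf for a 960 / 0 ratio.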
wr_cnt: 0 state: free 00:16:46.268 [2024-11-17 23:22:09.858556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:46.268 [2024-11-17 23:22:09.858563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:46.268 [2024-11-17 23:22:09.858572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:46.268 [2024-11-17 23:22:09.858593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:46.268 [2024-11-17 23:22:09.858603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:46.268 [2024-11-17 23:22:09.858610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:46.268 [2024-11-17 23:22:09.858620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:46.268 [2024-11-17 23:22:09.858627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:46.268 [2024-11-17 23:22:09.858637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:46.268 [2024-11-17 23:22:09.858644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:46.268 [2024-11-17 23:22:09.858653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:46.268 [2024-11-17 23:22:09.858660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:46.268 [2024-11-17 23:22:09.858668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:46.268 [2024-11-17 23:22:09.858677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:46.269 [2024-11-17 23:22:09.858686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:46.269 [2024-11-17 23:22:09.858693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:46.269 [2024-11-17 23:22:09.858705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:46.269 [2024-11-17 23:22:09.858712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:46.269 [2024-11-17 23:22:09.858733] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:46.269 [2024-11-17 23:22:09.858741] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af926b10-56ee-4fe0-b2b0-4ca61c941c28 00:16:46.269 [2024-11-17 23:22:09.858750] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:46.269 [2024-11-17 23:22:09.858758] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:46.269 [2024-11-17 23:22:09.858777] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:46.269 [2024-11-17 23:22:09.858787] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:46.269 [2024-11-17 23:22:09.858795] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:46.269 [2024-11-17 23:22:09.858803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:46.269 
00:16:46.269 [2024-11-17 23:22:09.858812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:16:46.269 [2024-11-17 23:22:09.858819] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:16:46.269 [2024-11-17 23:22:09.858827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
[2024-11-17 23:22:09.858834] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics', duration: 1.013 ms, status: 0
[2024-11-17 23:22:09.860778] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P', duration: 1.848 ms, status: 0
[2024-11-17 23:22:09.860940] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing', duration: 0.085 ms, status: 0
[2024-11-17 23:22:09.867364] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize reloc', duration: 0.000 ms, status: 0
[2024-11-17 23:22:09.867610] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands metadata', duration: 0.000 ms, status: 0
[2024-11-17 23:22:09.867706] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize trim map', duration: 0.000 ms, status: 0
[2024-11-17 23:22:09.867778] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize valid map', duration: 0.000 ms, status: 0
[2024-11-17 23:22:09.879528] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize NV cache', duration: 0.000 ms, status: 0
[2024-11-17 23:22:09.889233] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize metadata', duration: 0.000 ms, status: 0
[2024-11-17 23:22:09.889483] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize core IO channel', duration: 0.000 ms, status: 0
[2024-11-17 23:22:09.889583] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands', duration: 0.000 ms, status: 0
[2024-11-17 23:22:09.889721] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize memory pools', duration: 0.000 ms, status: 0
[2024-11-17 23:22:09.889806] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize superblock', duration: 0.000 ms, status: 0
[2024-11-17 23:22:09.889962] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open cache bdev', duration: 0.000 ms, status: 0
[2024-11-17 23:22:09.890065] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open base bdev', duration: 0.000 ms, status: 0
00:16:46.269 [2024-11-17 23:22:09.890302] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.272 ms, result 0
00:16:46.269 true
00:16:46.269 23:22:09 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 84731
00:16:46.269 23:22:09 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 84731 ']'
00:16:46.269 23:22:09 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 84731
00:16:46.269 23:22:09 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:16:46.269 23:22:09 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:46.269 23:22:09 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84731
00:16:46.269 killing process with pid 84731
23:22:09 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:46.269 23:22:09 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:46.269 23:22:09 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84731'
00:16:46.269 23:22:09 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 84731
00:16:46.269 23:22:09 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 84731
00:16:51.557 23:22:14 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:16:52.129 65536+0 records in
00:16:52.129 65536+0 records out
00:16:52.129 268435456 bytes (268 MB, 256 MiB) copied, 1.0983 s, 244 MB/s
00:16:52.129 23:22:15 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-17 23:22:15.923836] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
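The dd transfer above is easy to sanity-check. The minimal Python sketch below just redoes dd's unit arithmetic (decimal MB vs. binary MiB) with the numbers reported in the log; nothing in it is SPDK-specific:

```python
# Sanity check of the dd output above: 65536 blocks of 4 KiB equal the
# 268435456 bytes reported, which dd prints in both decimal and binary units.
bs = 4 * 1024            # dd bs=4K -> 4096 bytes per block
count = 65536            # dd count=65536
elapsed = 1.0983         # seconds, as reported by dd

total = bs * count
assert total == 268_435_456                    # bytes, matching the log
print(f"{total / 10**6:.6f} MB")               # 268.435456 -> dd's "268 MB"
print(f"{total / 2**20:.0f} MiB")              # 256       -> dd's "256 MiB"
print(f"{total / 10**6 / elapsed:.0f} MB/s")   # 244       -> dd's "244 MB/s"
```

That 256 MiB random pattern is what the spdk_dd invocation above then copies onto the ftl0 bdev; the 'Copying: .../256 [MB]' progress further down tracks the same 256 MiB.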
00:16:52.129 [2024-11-17 23:22:15.924006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84908 ]
00:16:52.390 [2024-11-17 23:22:16.070972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:52.390 [2024-11-17 23:22:16.100267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:52.652 [2024-11-17 23:22:16.212691] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:52.652 [2024-11-17 23:22:16.212770] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-17 23:22:16.368867] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration', duration: 0.005 ms, status: 0
[2024-11-17 23:22:16.371191] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev', duration: 2.224 ms, status: 0
00:16:52.652 [2024-11-17 23:22:16.371413] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:16:52.652 [2024-11-17 23:22:16.371635] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-17 23:22:16.371661] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev', duration: 0.257 ms, status: 0
00:16:52.652 [2024-11-17 23:22:16.372791] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-17 23:22:16.374900] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block', duration: 2.111 ms, status: 0
[2024-11-17 23:22:16.375012] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block', duration: 0.015 ms, status: 0
[2024-11-17 23:22:16.379744] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools', duration: 4.665 ms, status: 0
[2024-11-17 23:22:16.379911] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands', duration: 0.083 ms, status: 0
[2024-11-17 23:22:16.379973] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device', duration: 0.007 ms, status: 0
00:16:52.652 [2024-11-17 23:22:16.380019] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
[2024-11-17 23:22:16.381284] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel', duration: 1.269 ms, status: 0
[2024-11-17 23:22:16.381362] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands', duration: 0.011 ms, status: 0
00:16:52.652 [2024-11-17 23:22:16.381403] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:16:52.652 [2024-11-17 23:22:16.381420] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:16:52.652 [2024-11-17 23:22:16.381457] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:16:52.652 [2024-11-17 23:22:16.381474] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:16:52.652 [2024-11-17 23:22:16.381577] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:16:52.652 [2024-11-17 23:22:16.381587] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:16:52.652 [2024-11-17 23:22:16.381600] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:16:52.652 [2024-11-17 23:22:16.381613] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:16:52.652 [2024-11-17 23:22:16.381621] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:16:52.652 [2024-11-17 23:22:16.381629] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:16:52.652 [2024-11-17 23:22:16.381636] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:16:52.652 [2024-11-17 23:22:16.381643] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:16:52.652 [2024-11-17 23:22:16.381650] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-17 23:22:16.381658] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout', duration: 0.255 ms, status: 0
[2024-11-17 23:22:16.381770] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout', duration: 0.068 ms, status: 0
00:16:52.652 [2024-11-17 23:22:16.381906] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    Region            offset (MiB)   blocks (MiB)
    sb                      0.00           0.12
    l2p                     0.12          90.00
    band_md                90.12           0.50
    band_md_mirror         90.62           0.50
    nvc_md                123.88           0.12
    nvc_md_mirror         124.00           0.12
    p2l0                   91.12           8.00
    p2l1                   99.12           8.00
    p2l2                  107.12           8.00
    p2l3                  115.12           8.00
    trim_md               123.12           0.25
    trim_md_mirror        123.38           0.25
    trim_log              123.62           0.12
    trim_log_mirror       123.75           0.12
00:16:52.653 [2024-11-17 23:22:16.382253] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    Region            offset (MiB)   blocks (MiB)
    sb_mirror               0.00           0.12
    vmap               102400.25           3.38
    data_btm                0.25      102400.00
00:16:52.653 [2024-11-17 23:22:16.382331] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
    Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
    Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
    Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
    Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
    Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
    Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
    Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
    Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
    Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
    Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
    Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
    Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:16:52.653 [2024-11-17 23:22:16.382454] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
    Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
    Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
    Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-11-17 23:22:16.382502] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade', duration: 0.682 ms, status: 0
[2024-11-17 23:22:16.391311] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata', duration: 8.742 ms, status: 0
[2024-11-17 23:22:16.391474] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses', duration: 0.061 ms, status: 0
[2024-11-17 23:22:16.421541] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache', duration: 30.019 ms, status: 0
[2024-11-17 23:22:16.421803] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map', duration: 0.004 ms, status: 0
[2024-11-17 23:22:16.422194] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map', duration: 0.330 ms, status: 0
[2024-11-17 23:22:16.422384] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata', duration: 0.119 ms, status: 0
[2024-11-17 23:22:16.428010] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc', duration: 5.561 ms, status: 0
00:16:52.654 [2024-11-17 23:22:16.430558] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:16:52.654 [2024-11-17 23:22:16.430599] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-17 23:22:16.430611] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata', duration: 2.352 ms, status: 0
[2024-11-17 23:22:16.445130] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata', duration: 14.448 ms, status: 0
[2024-11-17 23:22:16.447113] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata', duration: 1.862 ms, status: 0
[2024-11-17 23:22:16.448582] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata', duration: 1.302 ms, status: 0
[2024-11-17 23:22:16.448956] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing', duration: 0.260 ms, status: 0
[2024-11-17 23:22:16.464356] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints', duration: 15.347 ms, status: 0
00:16:52.914 [2024-11-17 23:22:16.471812] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
[2024-11-17 23:22:16.485818] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P', duration: 21.314 ms, status: 0
[2024-11-17 23:22:16.485969] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P', duration: 0.010 ms, status: 0
[2024-11-17 23:22:16.486054] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization', duration: 0.039 ms, status: 0
[2024-11-17 23:22:16.486102] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller', duration: 0.004 ms, status: 0
00:16:52.914 [2024-11-17 23:22:16.486152] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-17 23:22:16.486163] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup', duration: 0.012 ms, status: 0
[2024-11-17 23:22:16.489549] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state', duration: 3.339 ms, status: 0
[2024-11-17 23:22:16.489827] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization', duration: 0.037 ms, status: 0
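The startup dump above carries enough numbers to cross-check the L2P sizing. A minimal sketch follows; the constants are copied from the log, while the 4 KiB FTL block size is an assumption that is consistent with the arithmetic but not stated anywhere in this log:

```python
# Cross-check of the FTL startup dump above.
entries = 23_592_960   # "L2P entries: 23592960"
addr_size = 4          # "L2P address size: 4" (bytes per entry)

# 23592960 entries x 4 B = 90 MiB, matching "Region l2p ... blocks: 90.00 MiB"
print(entries * addr_size / 2**20)     # 90.0

# Assumed 4 KiB FTL block: the 0x5a00-block region in the SB metadata dump
# then also comes out at 90 MiB, and the mapped user space at 90 GiB.
block_size = 4096
print(0x5a00 * block_size / 2**20)     # 90.0 MiB
print(entries * block_size / 2**30)    # 90.0 GiB
```

The "l2p maximum resident size is: 59 (of 60) MiB" notice reads consistently with this: the full 90 MiB mapping table exceeds the 60 MiB L2P cache budget, so only a ~59 MiB window of it stays resident at a time.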
00:16:52.915 [2024-11-17 23:22:16.491082] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:16:52.915 [2024-11-17 23:22:16.492117] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 121.935 ms, result 0
00:16:52.915 [2024-11-17 23:22:16.492775] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:52.915 [2024-11-17 23:22:16.502482] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:16:53.852 [2024-11-17T23:22:18.605Z] Copying: 31/256 [MB] (31 MBps)
[2024-11-17T23:22:19.542Z] Copying: 56/256 [MB] (24 MBps)
[2024-11-17T23:22:20.926Z] Copying: 81/256 [MB] (25 MBps)
[2024-11-17T23:22:21.870Z] Copying: 98/256 [MB] (16 MBps)
[2024-11-17T23:22:22.814Z] Copying: 116/256 [MB] (18 MBps)
[2024-11-17T23:22:23.766Z] Copying: 142/256 [MB] (25 MBps)
[2024-11-17T23:22:24.711Z] Copying: 176/256 [MB] (34 MBps)
[2024-11-17T23:22:25.654Z] Copying: 206/256 [MB] (29 MBps)
[2024-11-17T23:22:26.598Z] Copying: 225/256 [MB] (18 MBps)
[2024-11-17T23:22:26.860Z] Copying: 251/256 [MB] (26 MBps)
[2024-11-17T23:22:26.860Z] Copying: 256/256 [MB] (average 24 MBps)
[2024-11-17 23:22:26.789583] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-17 23:22:26.791616] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel', duration: 0.004 ms, status: 0
00:17:03.039 [2024-11-17 23:22:26.791768] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
[2024-11-17 23:22:26.792523] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device', duration: 0.739 ms, status: 0
[2024-11-17 23:22:26.794944] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller', duration: 2.329 ms, status: 0
[2024-11-17 23:22:26.802302] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist L2P', duration: 7.253 ms, status: 0
[2024-11-17 23:22:26.809360] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finish L2P trims', duration: 6.928 ms, status: 0
[2024-11-17 23:22:26.811704] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist NV cache metadata', duration: 2.021 ms, status: 0
[2024-11-17 23:22:26.816124] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist valid map metadata', duration: 4.306 ms, status: 0
[2024-11-17 23:22:26.816490] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist P2L metadata', duration: 0.086 ms, status: 0
[2024-11-17 23:22:26.819463] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist band info metadata', duration: 2.917 ms, status: 0
[2024-11-17 23:22:26.823150] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist trim metadata', duration: 2.508 ms, status: 0
[2024-11-17 23:22:26.825944] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist superblock', duration: 2.142 ms, status: 0
[2024-11-17 23:22:26.828487] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL clean state', duration: 1.921 ms, status: 0
00:17:03.039 [2024-11-17 23:22:26.828698] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:17:03.039 [2024-11-17 23:22:26.828740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.828766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.828788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.828809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.828830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.828851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.828872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.828935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.828957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.828977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.828998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 
state: free 00:17:03.040 [2024-11-17 23:22:26.829306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 
0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.829991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:03.040 [2024-11-17 23:22:26.830632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:03.041 [2024-11-17 23:22:26.830653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:03.041 [2024-11-17 23:22:26.830673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:03.041 [2024-11-17 23:22:26.830698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:03.041 [2024-11-17 23:22:26.830729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:03.041 [2024-11-17 23:22:26.830750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:03.041 [2024-11-17 23:22:26.830771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:03.041 [2024-11-17 23:22:26.830792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:03.041 [2024-11-17 23:22:26.830813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:03.041 [2024-11-17 23:22:26.830833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:03.041 [2024-11-17 23:22:26.830854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:03.041 [2024-11-17 23:22:26.830875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:03.041 [2024-11-17 23:22:26.830917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:03.041 [2024-11-17 23:22:26.830938] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:17:03.041 [2024-11-17 23:22:26.830982] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:03.041 [2024-11-17 23:22:26.831002] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af926b10-56ee-4fe0-b2b0-4ca61c941c28
00:17:03.041 [2024-11-17 23:22:26.831023] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:03.041 [2024-11-17 23:22:26.831043] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:03.041 [2024-11-17 23:22:26.831062] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:03.041 [2024-11-17 23:22:26.831082] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:03.041 [2024-11-17 23:22:26.831109] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:03.041 [2024-11-17 23:22:26.831131] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:03.041 [2024-11-17 23:22:26.831150] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:03.041 [2024-11-17 23:22:26.831168] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:03.041 [2024-11-17 23:22:26.831186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:17:03.041 [2024-11-17 23:22:26.831205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:03.041 [2024-11-17 23:22:26.831225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:03.041 [2024-11-17 23:22:26.831263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.510 ms
00:17:03.041 [2024-11-17 23:22:26.831283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:03.041 [2024-11-17 23:22:26.833808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:03.041 [2024-11-17 23:22:26.833840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:03.041 [2024-11-17 23:22:26.833850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.450 ms
00:17:03.041 [2024-11-17 23:22:26.833858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:03.041 [2024-11-17 23:22:26.834000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:03.041 [2024-11-17 23:22:26.834010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:03.041 [2024-11-17 23:22:26.834019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms
00:17:03.041 [2024-11-17 23:22:26.834027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:03.041 [2024-11-17 23:22:26.842025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:03.041 [2024-11-17 23:22:26.842078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:03.041 [2024-11-17 23:22:26.842090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:03.041 [2024-11-17 23:22:26.842098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:03.041 [2024-11-17 23:22:26.842187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:03.041 [2024-11-17 23:22:26.842196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:03.041 [2024-11-17 23:22:26.842204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:03.041 [2024-11-17 23:22:26.842212]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.041 [2024-11-17 23:22:26.842262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.041 [2024-11-17 23:22:26.842271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:03.041 [2024-11-17 23:22:26.842280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.041 [2024-11-17 23:22:26.842288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.041 [2024-11-17 23:22:26.842305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.041 [2024-11-17 23:22:26.842320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:03.041 [2024-11-17 23:22:26.842330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.041 [2024-11-17 23:22:26.842341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.303 [2024-11-17 23:22:26.856865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.303 [2024-11-17 23:22:26.856988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:03.303 [2024-11-17 23:22:26.857001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.303 [2024-11-17 23:22:26.857009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.303 [2024-11-17 23:22:26.866957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.303 [2024-11-17 23:22:26.867139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:03.303 [2024-11-17 23:22:26.867158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.303 [2024-11-17 23:22:26.867166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.303 [2024-11-17 23:22:26.867250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.303 [2024-11-17 23:22:26.867261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:03.303 [2024-11-17 23:22:26.867270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.303 [2024-11-17 23:22:26.867278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.303 [2024-11-17 23:22:26.867310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.303 [2024-11-17 23:22:26.867320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:03.303 [2024-11-17 23:22:26.867328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.303 [2024-11-17 23:22:26.867338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.303 [2024-11-17 23:22:26.867422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.303 [2024-11-17 23:22:26.867431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:03.303 [2024-11-17 23:22:26.867440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.303 [2024-11-17 23:22:26.867448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.303 [2024-11-17 23:22:26.867483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.303 [2024-11-17 23:22:26.867499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:03.303 [2024-11-17 23:22:26.867508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:17:03.303 [2024-11-17 23:22:26.867518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.303 [2024-11-17 23:22:26.867566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.303 [2024-11-17 23:22:26.867575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:03.303 [2024-11-17 23:22:26.867584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.303 [2024-11-17 23:22:26.867592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.303 [2024-11-17 23:22:26.867640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:03.303 [2024-11-17 23:22:26.867676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:03.303 [2024-11-17 23:22:26.867690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:03.303 [2024-11-17 23:22:26.867708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.303 [2024-11-17 23:22:26.867860] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 76.215 ms, result 0 00:17:03.303 00:17:03.303 00:17:03.303 23:22:27 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=85030 00:17:03.303 23:22:27 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 85030 00:17:03.303 23:22:27 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 85030 ']' 00:17:03.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.303 23:22:27 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.303 23:22:27 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.303 23:22:27 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.303 23:22:27 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.303 23:22:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:03.303 23:22:27 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:03.566 [2024-11-17 23:22:27.170356] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
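(Editor's aside: at this point trim.sh has relaunched spdk_tgt with the ftl_init log flag and waitforlisten is polling the RPC socket before any rpc.py call is issued. Below is a minimal Python stand-in for that wait step. The binary path, the -L ftl_init flag, the /var/tmp/spdk.sock socket path, and the 100-retry budget are taken from the trace above; the connect-poll loop and the 0.1 s interval are assumptions of this sketch, not the actual autotest_common.sh logic.)

    #!/usr/bin/env python3
    """Sketch: start spdk_tgt, then poll its UNIX-domain RPC socket
    until it accepts a connection (roughly what waitforlisten does)."""
    import socket
    import subprocess
    import sys
    import time

    SPDK_TGT = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt"
    RPC_SOCK = "/var/tmp/spdk.sock"

    def wait_for_listen(path: str, retries: int = 100, delay: float = 0.1) -> bool:
        """Return True once something accepts connections on `path`."""
        for _ in range(retries):
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                try:
                    s.connect(path)
                    return True          # target is up and listening
                except OSError:
                    time.sleep(delay)    # not bound yet; retry
        return False

    if __name__ == "__main__":
        proc = subprocess.Popen([SPDK_TGT, "-L", "ftl_init"])
        if not wait_for_listen(RPC_SOCK):
            proc.kill()
            sys.exit("timed out waiting for " + RPC_SOCK)
        print(f"spdk_tgt (pid {proc.pid}) is listening on {RPC_SOCK}")

Once the socket accepts connections, rpc.py calls such as the load_config and bdev_ftl_unmap invocations seen later in this log can proceed.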
00:17:03.566 [2024-11-17 23:22:27.170922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85030 ] 00:17:03.566 [2024-11-17 23:22:27.315977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.566 [2024-11-17 23:22:27.352439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.516 23:22:28 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.516 23:22:28 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:04.516 23:22:28 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:04.516 [2024-11-17 23:22:28.250559] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:04.516 [2024-11-17 23:22:28.250657] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:04.780 [2024-11-17 23:22:28.434090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.780 [2024-11-17 23:22:28.434438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:04.780 [2024-11-17 23:22:28.434467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:04.780 [2024-11-17 23:22:28.434478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.780 [2024-11-17 23:22:28.437343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.780 [2024-11-17 23:22:28.437403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:04.780 [2024-11-17 23:22:28.437415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.827 ms 00:17:04.780 [2024-11-17 23:22:28.437425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.780 [2024-11-17 23:22:28.437567] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:04.780 [2024-11-17 23:22:28.437864] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:04.780 [2024-11-17 23:22:28.438129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.780 [2024-11-17 23:22:28.438194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:04.780 [2024-11-17 23:22:28.438618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:17:04.780 [2024-11-17 23:22:28.438655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.780 [2024-11-17 23:22:28.441097] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:04.780 [2024-11-17 23:22:28.445996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.780 [2024-11-17 23:22:28.446057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:04.780 [2024-11-17 23:22:28.446072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.895 ms 00:17:04.780 [2024-11-17 23:22:28.446084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.780 [2024-11-17 23:22:28.446176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.780 [2024-11-17 23:22:28.446188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:04.780 [2024-11-17 23:22:28.446203] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:04.780 [2024-11-17 23:22:28.446211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.780 [2024-11-17 23:22:28.457835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.780 [2024-11-17 23:22:28.458088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:04.780 [2024-11-17 23:22:28.458114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.560 ms 00:17:04.780 [2024-11-17 23:22:28.458124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.780 [2024-11-17 23:22:28.458289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.780 [2024-11-17 23:22:28.458303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:04.780 [2024-11-17 23:22:28.458322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:17:04.780 [2024-11-17 23:22:28.458333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.780 [2024-11-17 23:22:28.458366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.780 [2024-11-17 23:22:28.458376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:04.780 [2024-11-17 23:22:28.458391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:04.780 [2024-11-17 23:22:28.458400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.780 [2024-11-17 23:22:28.458430] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:04.780 [2024-11-17 23:22:28.461177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.780 [2024-11-17 23:22:28.461362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:04.780 [2024-11-17 23:22:28.461380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.759 ms 00:17:04.780 [2024-11-17 23:22:28.461394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.780 [2024-11-17 23:22:28.461450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.780 [2024-11-17 23:22:28.461462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:04.780 [2024-11-17 23:22:28.461471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:04.780 [2024-11-17 23:22:28.461481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.780 [2024-11-17 23:22:28.461506] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:04.780 [2024-11-17 23:22:28.461535] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:04.780 [2024-11-17 23:22:28.461576] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:04.780 [2024-11-17 23:22:28.461598] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:04.780 [2024-11-17 23:22:28.461712] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:04.780 [2024-11-17 23:22:28.461728] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:04.780 [2024-11-17 23:22:28.461740] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:04.780 [2024-11-17 23:22:28.461753] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:04.780 [2024-11-17 23:22:28.461763] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:04.780 [2024-11-17 23:22:28.461776] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:04.780 [2024-11-17 23:22:28.461785] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:04.780 [2024-11-17 23:22:28.461797] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:04.780 [2024-11-17 23:22:28.461808] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:04.780 [2024-11-17 23:22:28.461820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.780 [2024-11-17 23:22:28.461828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:04.780 [2024-11-17 23:22:28.461839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:17:04.780 [2024-11-17 23:22:28.461848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.780 [2024-11-17 23:22:28.461955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.780 [2024-11-17 23:22:28.461967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:04.780 [2024-11-17 23:22:28.461978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:04.780 [2024-11-17 23:22:28.461987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.780 [2024-11-17 23:22:28.462100] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:04.780 [2024-11-17 23:22:28.462115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:04.780 [2024-11-17 23:22:28.462127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:04.780 [2024-11-17 23:22:28.462138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.780 [2024-11-17 23:22:28.462152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:04.780 [2024-11-17 23:22:28.462160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:04.780 [2024-11-17 23:22:28.462170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:04.780 [2024-11-17 23:22:28.462178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:04.780 [2024-11-17 23:22:28.462193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:04.780 [2024-11-17 23:22:28.462201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:04.780 [2024-11-17 23:22:28.462212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:04.780 [2024-11-17 23:22:28.462220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:04.781 [2024-11-17 23:22:28.462231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:04.781 [2024-11-17 23:22:28.462239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:04.781 [2024-11-17 23:22:28.462253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:04.781 [2024-11-17 23:22:28.462261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.781 
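(Editor's aside: the figures in this layout dump are internally consistent, which is a useful sanity check when debugging layout changes. Two illustrative calculations in plain Python, not part of the test; the 4 KiB block size per P2L checkpoint page is an assumption here, while every other number is quoted from the dump.)

    # "L2P entries: 23592960" x "L2P address size: 4" bytes per entry
    l2p_entries = 23592960
    l2p_addr_size = 4
    print(l2p_entries * l2p_addr_size / 2**20)  # 90.0 -> "Region l2p ... blocks: 90.00 MiB"

    # "P2L checkpoint pages: 2048", assuming one 4 KiB block per page
    print(2048 * 4096 / 2**20)                  # 8.0 -> each "Region p2lN ... blocks: 8.00 MiB" below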
[2024-11-17 23:22:28.462271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:04.781 [2024-11-17 23:22:28.462280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:04.781 [2024-11-17 23:22:28.462290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.781 [2024-11-17 23:22:28.462298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:04.781 [2024-11-17 23:22:28.462310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:04.781 [2024-11-17 23:22:28.462317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:04.781 [2024-11-17 23:22:28.462327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:04.781 [2024-11-17 23:22:28.462333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:04.781 [2024-11-17 23:22:28.462343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:04.781 [2024-11-17 23:22:28.462350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:04.781 [2024-11-17 23:22:28.462359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:04.781 [2024-11-17 23:22:28.462367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:04.781 [2024-11-17 23:22:28.462375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:04.781 [2024-11-17 23:22:28.462382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:04.781 [2024-11-17 23:22:28.462391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:04.781 [2024-11-17 23:22:28.462397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:04.781 [2024-11-17 23:22:28.462406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:04.781 [2024-11-17 23:22:28.462412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:04.781 [2024-11-17 23:22:28.462423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:04.781 [2024-11-17 23:22:28.462430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:04.781 [2024-11-17 23:22:28.462441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:04.781 [2024-11-17 23:22:28.462447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:04.781 [2024-11-17 23:22:28.462456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:04.781 [2024-11-17 23:22:28.462463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.781 [2024-11-17 23:22:28.462475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:04.781 [2024-11-17 23:22:28.462482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:04.781 [2024-11-17 23:22:28.462491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.781 [2024-11-17 23:22:28.462499] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:04.781 [2024-11-17 23:22:28.462509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:04.781 [2024-11-17 23:22:28.462518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:04.781 [2024-11-17 23:22:28.462532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.781 [2024-11-17 23:22:28.462545] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:04.781 [2024-11-17 23:22:28.462554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:04.781 [2024-11-17 23:22:28.462562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:04.781 [2024-11-17 23:22:28.462574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:04.781 [2024-11-17 23:22:28.462582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:04.781 [2024-11-17 23:22:28.462593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:04.781 [2024-11-17 23:22:28.462602] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:04.781 [2024-11-17 23:22:28.462613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:04.781 [2024-11-17 23:22:28.462626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:04.781 [2024-11-17 23:22:28.462637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:04.781 [2024-11-17 23:22:28.462644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:04.781 [2024-11-17 23:22:28.462654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:04.781 [2024-11-17 23:22:28.462661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:04.781 [2024-11-17 23:22:28.462671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:04.781 [2024-11-17 23:22:28.462679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:04.781 [2024-11-17 23:22:28.462690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:04.781 [2024-11-17 23:22:28.462697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:04.781 [2024-11-17 23:22:28.462707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:04.781 [2024-11-17 23:22:28.462713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:04.781 [2024-11-17 23:22:28.462722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:04.781 [2024-11-17 23:22:28.462729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:04.781 [2024-11-17 23:22:28.462749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:04.781 [2024-11-17 23:22:28.462757] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:04.781 [2024-11-17 
23:22:28.462769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:04.781 [2024-11-17 23:22:28.462781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:04.781 [2024-11-17 23:22:28.462791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:04.781 [2024-11-17 23:22:28.462798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:04.781 [2024-11-17 23:22:28.462807] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:04.781 [2024-11-17 23:22:28.462815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.781 [2024-11-17 23:22:28.462824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:04.781 [2024-11-17 23:22:28.462832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:17:04.781 [2024-11-17 23:22:28.462843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.781 [2024-11-17 23:22:28.483329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.781 [2024-11-17 23:22:28.483381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:04.781 [2024-11-17 23:22:28.483393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.372 ms 00:17:04.781 [2024-11-17 23:22:28.483406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.781 [2024-11-17 23:22:28.483550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.781 [2024-11-17 23:22:28.483570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:04.781 [2024-11-17 23:22:28.483579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:04.781 [2024-11-17 23:22:28.483589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.781 [2024-11-17 23:22:28.500773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.781 [2024-11-17 23:22:28.500826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:04.781 [2024-11-17 23:22:28.500838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.159 ms 00:17:04.781 [2024-11-17 23:22:28.500850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.781 [2024-11-17 23:22:28.500959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.781 [2024-11-17 23:22:28.500974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:04.781 [2024-11-17 23:22:28.500988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:04.781 [2024-11-17 23:22:28.500999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.781 [2024-11-17 23:22:28.501662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.781 [2024-11-17 23:22:28.501716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:04.781 [2024-11-17 23:22:28.501729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:17:04.781 [2024-11-17 23:22:28.501742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:04.781 [2024-11-17 23:22:28.501929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.781 [2024-11-17 23:22:28.501955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:04.781 [2024-11-17 23:22:28.501964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:17:04.781 [2024-11-17 23:22:28.501975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.781 [2024-11-17 23:22:28.513352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.781 [2024-11-17 23:22:28.513405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:04.781 [2024-11-17 23:22:28.513417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.348 ms 00:17:04.781 [2024-11-17 23:22:28.513428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.781 [2024-11-17 23:22:28.518359] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:04.781 [2024-11-17 23:22:28.518416] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:04.781 [2024-11-17 23:22:28.518430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.781 [2024-11-17 23:22:28.518443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:04.782 [2024-11-17 23:22:28.518453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.865 ms 00:17:04.782 [2024-11-17 23:22:28.518465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.782 [2024-11-17 23:22:28.534975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.782 [2024-11-17 23:22:28.535032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:04.782 [2024-11-17 23:22:28.535047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.423 ms 00:17:04.782 [2024-11-17 23:22:28.535062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.782 [2024-11-17 23:22:28.538422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.782 [2024-11-17 23:22:28.538623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:04.782 [2024-11-17 23:22:28.538642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.234 ms 00:17:04.782 [2024-11-17 23:22:28.538653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.782 [2024-11-17 23:22:28.541574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.782 [2024-11-17 23:22:28.541627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:04.782 [2024-11-17 23:22:28.541637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.869 ms 00:17:04.782 [2024-11-17 23:22:28.541647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.782 [2024-11-17 23:22:28.542212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.782 [2024-11-17 23:22:28.542322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:04.782 [2024-11-17 23:22:28.542385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:17:04.782 [2024-11-17 23:22:28.542413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.782 [2024-11-17 
23:22:28.575684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.782 [2024-11-17 23:22:28.575866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:04.782 [2024-11-17 23:22:28.575910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.180 ms 00:17:04.782 [2024-11-17 23:22:28.575925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.782 [2024-11-17 23:22:28.584312] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:05.044 [2024-11-17 23:22:28.604415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.044 [2024-11-17 23:22:28.604468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:05.044 [2024-11-17 23:22:28.604484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.309 ms 00:17:05.044 [2024-11-17 23:22:28.604493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.044 [2024-11-17 23:22:28.604595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.044 [2024-11-17 23:22:28.604612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:05.044 [2024-11-17 23:22:28.604628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:05.044 [2024-11-17 23:22:28.604636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.044 [2024-11-17 23:22:28.604701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.044 [2024-11-17 23:22:28.604715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:05.044 [2024-11-17 23:22:28.604726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:05.044 [2024-11-17 23:22:28.604734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.044 [2024-11-17 23:22:28.604762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.044 [2024-11-17 23:22:28.604771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:05.044 [2024-11-17 23:22:28.604788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:05.044 [2024-11-17 23:22:28.604798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.044 [2024-11-17 23:22:28.604838] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:05.044 [2024-11-17 23:22:28.604849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.044 [2024-11-17 23:22:28.604859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:05.044 [2024-11-17 23:22:28.604867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:05.044 [2024-11-17 23:22:28.604908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.044 [2024-11-17 23:22:28.610398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.044 [2024-11-17 23:22:28.610447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:05.044 [2024-11-17 23:22:28.610458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.463 ms 00:17:05.044 [2024-11-17 23:22:28.610471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.044 [2024-11-17 23:22:28.610561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.044 [2024-11-17 23:22:28.610573] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:17:05.044 [2024-11-17 23:22:28.610582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms
00:17:05.044 [2024-11-17 23:22:28.610593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:05.044 [2024-11-17 23:22:28.611690] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:05.044 [2024-11-17 23:22:28.612940] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 177.242 ms, result 0
00:17:05.044 [2024-11-17 23:22:28.614647] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:05.044 Some configs were skipped because the RPC state that can call them passed over.
00:17:05.044 23:22:28 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:17:05.044 [2024-11-17 23:22:28.856821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:05.044 [2024-11-17 23:22:28.857076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:17:05.044 [2024-11-17 23:22:28.857158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.204 ms
00:17:05.044 [2024-11-17 23:22:28.857186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:05.044 [2024-11-17 23:22:28.857252] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.644 ms, result 0
00:17:05.044 true
00:17:05.306 23:22:28 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:17:05.307 [2024-11-17 23:22:29.084516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:05.307 [2024-11-17 23:22:29.084708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:17:05.307 [2024-11-17 23:22:29.084730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.687 ms
00:17:05.307 [2024-11-17 23:22:29.084741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:05.307 [2024-11-17 23:22:29.084786] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.956 ms, result 0
00:17:05.307 true
00:17:05.307 23:22:29 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 85030
00:17:05.307 23:22:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 85030 ']'
00:17:05.307 23:22:29 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 85030
00:17:05.307 23:22:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:17:05.307 23:22:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:05.307 23:22:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85030
00:17:05.569 killing process with pid 85030
00:17:05.569 23:22:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:05.569 23:22:29 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:05.569 23:22:29 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85030'
00:17:05.569 23:22:29 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 85030
00:17:05.569 23:22:29 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 85030
00:17:05.569 [2024-11-17 23:22:29.331419]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.569 [2024-11-17 23:22:29.331496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:05.569 [2024-11-17 23:22:29.331515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:05.569 [2024-11-17 23:22:29.331525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.569 [2024-11-17 23:22:29.331574] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:05.569 [2024-11-17 23:22:29.332477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.569 [2024-11-17 23:22:29.332515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:05.569 [2024-11-17 23:22:29.332531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.884 ms 00:17:05.569 [2024-11-17 23:22:29.332545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.569 [2024-11-17 23:22:29.332859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.569 [2024-11-17 23:22:29.332872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:05.569 [2024-11-17 23:22:29.332901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:17:05.569 [2024-11-17 23:22:29.332912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.569 [2024-11-17 23:22:29.337398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.569 [2024-11-17 23:22:29.337444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:05.569 [2024-11-17 23:22:29.337455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.465 ms 00:17:05.569 [2024-11-17 23:22:29.337467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.569 [2024-11-17 23:22:29.344622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.569 [2024-11-17 23:22:29.344673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:05.569 [2024-11-17 23:22:29.344687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.106 ms 00:17:05.569 [2024-11-17 23:22:29.344704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.569 [2024-11-17 23:22:29.347924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.569 [2024-11-17 23:22:29.347979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:05.569 [2024-11-17 23:22:29.347991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.110 ms 00:17:05.569 [2024-11-17 23:22:29.348002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.569 [2024-11-17 23:22:29.354440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.569 [2024-11-17 23:22:29.354498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:05.569 [2024-11-17 23:22:29.354510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.385 ms 00:17:05.569 [2024-11-17 23:22:29.354525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.569 [2024-11-17 23:22:29.354676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.569 [2024-11-17 23:22:29.354691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:05.569 [2024-11-17 23:22:29.354701] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:17:05.569 [2024-11-17 23:22:29.354710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.569 [2024-11-17 23:22:29.358077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.569 [2024-11-17 23:22:29.358286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:05.569 [2024-11-17 23:22:29.358305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.345 ms 00:17:05.569 [2024-11-17 23:22:29.358321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.569 [2024-11-17 23:22:29.360908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.569 [2024-11-17 23:22:29.360960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:05.569 [2024-11-17 23:22:29.360970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.537 ms 00:17:05.569 [2024-11-17 23:22:29.360980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.569 [2024-11-17 23:22:29.363325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.569 [2024-11-17 23:22:29.363384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:05.569 [2024-11-17 23:22:29.363395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.293 ms 00:17:05.569 [2024-11-17 23:22:29.363405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.569 [2024-11-17 23:22:29.365817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.569 [2024-11-17 23:22:29.365873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:05.569 [2024-11-17 23:22:29.365903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.329 ms 00:17:05.569 [2024-11-17 23:22:29.365914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.569 [2024-11-17 23:22:29.365961] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:05.569 [2024-11-17 23:22:29.365981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:05.569 [2024-11-17 23:22:29.365992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:05.569 [2024-11-17 23:22:29.366006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:05.569 [2024-11-17 23:22:29.366014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:05.569 [2024-11-17 23:22:29.366026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:05.569 [2024-11-17 23:22:29.366035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:05.569 [2024-11-17 23:22:29.366047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:05.569 [2024-11-17 23:22:29.366056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:05.569 [2024-11-17 23:22:29.366068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:05.569 [2024-11-17 23:22:29.366076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:05.569 [2024-11-17 23:22:29.366086] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 11-100: 0 / 261120 wr_cnt: 0 state: free 00:17:05.570
[2024-11-17 23:22:29.366968] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:05.570 [2024-11-17 23:22:29.366978] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af926b10-56ee-4fe0-b2b0-4ca61c941c28 00:17:05.570 [2024-11-17 23:22:29.366990] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:05.570 [2024-11-17 23:22:29.367001] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:05.570 [2024-11-17 23:22:29.367012] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:05.570 [2024-11-17 23:22:29.367020] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:05.570 [2024-11-17 23:22:29.367031] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:05.570 [2024-11-17 23:22:29.367039] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:05.570 [2024-11-17 23:22:29.367052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:05.570 [2024-11-17 23:22:29.367059] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:05.570 [2024-11-17 23:22:29.367068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:05.570 [2024-11-17 23:22:29.367076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:05.570 [2024-11-17 23:22:29.367087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:05.570 [2024-11-17 23:22:29.367102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.116 ms 00:17:05.570 [2024-11-17 23:22:29.367121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.570 [2024-11-17 23:22:29.370304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.570 [2024-11-17 23:22:29.370344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:05.570 [2024-11-17 23:22:29.370356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.160 ms 00:17:05.570 [2024-11-17 23:22:29.370367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.570 [2024-11-17 23:22:29.370542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.570 [2024-11-17 23:22:29.370555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:05.571 [2024-11-17 23:22:29.370565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:17:05.571 [2024-11-17 23:22:29.370576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.571 [2024-11-17 23:22:29.381593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.571 [2024-11-17 23:22:29.381646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:05.571 [2024-11-17 23:22:29.381658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.571 [2024-11-17 23:22:29.381669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.571 [2024-11-17 23:22:29.381771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.571 [2024-11-17 23:22:29.381791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:05.571 [2024-11-17 23:22:29.381801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.571 [2024-11-17 23:22:29.381814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.571 [2024-11-17 23:22:29.381876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.571 [2024-11-17 23:22:29.381918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:05.571 [2024-11-17 23:22:29.381927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.571 [2024-11-17 23:22:29.381937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.571 [2024-11-17 23:22:29.381963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.571 [2024-11-17 23:22:29.381974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:05.571 [2024-11-17 23:22:29.381984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.571 [2024-11-17 23:22:29.381996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.831 [2024-11-17 23:22:29.402411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.831 [2024-11-17 23:22:29.402472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:05.831 [2024-11-17 23:22:29.402486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.831 [2024-11-17 23:22:29.402497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.831 [2024-11-17 
23:22:29.417500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.831 [2024-11-17 23:22:29.417578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:05.831 [2024-11-17 23:22:29.417591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.831 [2024-11-17 23:22:29.417606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.831 [2024-11-17 23:22:29.417688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.831 [2024-11-17 23:22:29.417706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:05.831 [2024-11-17 23:22:29.417714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.831 [2024-11-17 23:22:29.417726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.831 [2024-11-17 23:22:29.417766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.831 [2024-11-17 23:22:29.417779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:05.831 [2024-11-17 23:22:29.417789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.831 [2024-11-17 23:22:29.417799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.831 [2024-11-17 23:22:29.417908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.831 [2024-11-17 23:22:29.417923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:05.831 [2024-11-17 23:22:29.417941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.831 [2024-11-17 23:22:29.417951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.831 [2024-11-17 23:22:29.417992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.831 [2024-11-17 23:22:29.418005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:05.831 [2024-11-17 23:22:29.418014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.831 [2024-11-17 23:22:29.418028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.831 [2024-11-17 23:22:29.418080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.831 [2024-11-17 23:22:29.418095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:05.831 [2024-11-17 23:22:29.418108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.831 [2024-11-17 23:22:29.418120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.831 [2024-11-17 23:22:29.418182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.831 [2024-11-17 23:22:29.418197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:05.831 [2024-11-17 23:22:29.418207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.831 [2024-11-17 23:22:29.418219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.831 [2024-11-17 23:22:29.418413] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 86.950 ms, result 0 00:17:06.091 23:22:29 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:06.091 23:22:29 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:06.091 [2024-11-17 23:22:29.793968] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:17:06.091 [2024-11-17 23:22:29.794113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85066 ] 00:17:06.351 [2024-11-17 23:22:29.940325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.351 [2024-11-17 23:22:29.979830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.351 [2024-11-17 23:22:30.128929] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:06.351 [2024-11-17 23:22:30.129301] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:06.614 [2024-11-17 23:22:30.291639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.614 [2024-11-17 23:22:30.291727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:06.614 [2024-11-17 23:22:30.291745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:06.614 [2024-11-17 23:22:30.291755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.614 [2024-11-17 23:22:30.294522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.614 [2024-11-17 23:22:30.294577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:06.614 [2024-11-17 23:22:30.294589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.745 ms 00:17:06.614 [2024-11-17 23:22:30.294606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.614 [2024-11-17 23:22:30.294719] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:06.614 [2024-11-17 23:22:30.295267] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:06.614 [2024-11-17 23:22:30.295344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.614 [2024-11-17 23:22:30.295371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:06.614 [2024-11-17 23:22:30.295394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:17:06.614 [2024-11-17 23:22:30.295669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.614 [2024-11-17 23:22:30.299311] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:06.614 [2024-11-17 23:22:30.304497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.614 [2024-11-17 23:22:30.304689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:06.614 [2024-11-17 23:22:30.304769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.188 ms 00:17:06.614 [2024-11-17 23:22:30.304798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.614 [2024-11-17 23:22:30.304921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.614 [2024-11-17 23:22:30.305084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:06.614 [2024-11-17 23:22:30.305124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.056 ms 00:17:06.614 [2024-11-17 23:22:30.305145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.614 [2024-11-17 23:22:30.316936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.614 [2024-11-17 23:22:30.317095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:06.614 [2024-11-17 23:22:30.317151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.674 ms 00:17:06.614 [2024-11-17 23:22:30.317176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.614 [2024-11-17 23:22:30.317362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.614 [2024-11-17 23:22:30.317461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:06.614 [2024-11-17 23:22:30.317488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:06.614 [2024-11-17 23:22:30.317509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.614 [2024-11-17 23:22:30.317590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.614 [2024-11-17 23:22:30.317616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:06.614 [2024-11-17 23:22:30.317638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:06.614 [2024-11-17 23:22:30.317659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.614 [2024-11-17 23:22:30.317749] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:06.614 [2024-11-17 23:22:30.320541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.614 [2024-11-17 23:22:30.320689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:06.614 [2024-11-17 23:22:30.320744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.802 ms 00:17:06.614 [2024-11-17 23:22:30.320770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.614 [2024-11-17 23:22:30.320847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.614 [2024-11-17 23:22:30.320872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:06.614 [2024-11-17 23:22:30.320911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:06.614 [2024-11-17 23:22:30.320975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.614 [2024-11-17 23:22:30.321018] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:06.614 [2024-11-17 23:22:30.321063] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:06.614 [2024-11-17 23:22:30.321135] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:06.614 [2024-11-17 23:22:30.321381] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:06.614 [2024-11-17 23:22:30.321686] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:06.614 [2024-11-17 23:22:30.321770] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:06.614 [2024-11-17 23:22:30.321849] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:06.614 [2024-11-17 23:22:30.321937] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:06.614 [2024-11-17 23:22:30.321972] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:06.615 [2024-11-17 23:22:30.322005] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:06.615 [2024-11-17 23:22:30.322025] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:06.615 [2024-11-17 23:22:30.322066] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:06.615 [2024-11-17 23:22:30.322102] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:06.615 [2024-11-17 23:22:30.322125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.615 [2024-11-17 23:22:30.322314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:06.615 [2024-11-17 23:22:30.322360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:17:06.615 [2024-11-17 23:22:30.322381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.615 [2024-11-17 23:22:30.322528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.615 [2024-11-17 23:22:30.322554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:06.615 [2024-11-17 23:22:30.322587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:06.615 [2024-11-17 23:22:30.322608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.615 [2024-11-17 23:22:30.322726] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:06.615 [2024-11-17 23:22:30.322754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:06.615 [2024-11-17 23:22:30.322769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:06.615 [2024-11-17 23:22:30.322778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.615 [2024-11-17 23:22:30.322786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:06.615 [2024-11-17 23:22:30.322794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:06.615 [2024-11-17 23:22:30.322804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:06.615 [2024-11-17 23:22:30.322815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:06.615 [2024-11-17 23:22:30.322822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:06.615 [2024-11-17 23:22:30.322830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:06.615 [2024-11-17 23:22:30.322838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:06.615 [2024-11-17 23:22:30.322846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:06.615 [2024-11-17 23:22:30.322853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:06.615 [2024-11-17 23:22:30.322860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:06.615 [2024-11-17 23:22:30.322867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:06.615 [2024-11-17 23:22:30.322874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.615 [2024-11-17 23:22:30.322906] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:06.615 [2024-11-17 23:22:30.322914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:06.615 [2024-11-17 23:22:30.322922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.615 [2024-11-17 23:22:30.322929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:06.615 [2024-11-17 23:22:30.322936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:06.615 [2024-11-17 23:22:30.322943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:06.615 [2024-11-17 23:22:30.322951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:06.615 [2024-11-17 23:22:30.322965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:06.615 [2024-11-17 23:22:30.322974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:06.615 [2024-11-17 23:22:30.322981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:06.615 [2024-11-17 23:22:30.322989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:06.615 [2024-11-17 23:22:30.322997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:06.615 [2024-11-17 23:22:30.323004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:06.615 [2024-11-17 23:22:30.323011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:06.615 [2024-11-17 23:22:30.323018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:06.615 [2024-11-17 23:22:30.323026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:06.615 [2024-11-17 23:22:30.323033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:06.615 [2024-11-17 23:22:30.323042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:06.615 [2024-11-17 23:22:30.323049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:06.615 [2024-11-17 23:22:30.323055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:06.615 [2024-11-17 23:22:30.323063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:06.615 [2024-11-17 23:22:30.323073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:06.615 [2024-11-17 23:22:30.323081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:06.615 [2024-11-17 23:22:30.323096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.615 [2024-11-17 23:22:30.323105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:06.615 [2024-11-17 23:22:30.323112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:06.615 [2024-11-17 23:22:30.323119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.615 [2024-11-17 23:22:30.323126] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:06.615 [2024-11-17 23:22:30.323134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:06.615 [2024-11-17 23:22:30.323149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:06.615 [2024-11-17 23:22:30.323157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:06.615 [2024-11-17 23:22:30.323165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:06.615 
[2024-11-17 23:22:30.323172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:06.615 [2024-11-17 23:22:30.323179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:06.615 [2024-11-17 23:22:30.323186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:06.615 [2024-11-17 23:22:30.323193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:06.615 [2024-11-17 23:22:30.323201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:06.615 [2024-11-17 23:22:30.323210] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:06.615 [2024-11-17 23:22:30.323221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:06.615 [2024-11-17 23:22:30.323232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:06.615 [2024-11-17 23:22:30.323243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:06.615 [2024-11-17 23:22:30.323251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:06.615 [2024-11-17 23:22:30.323259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:06.615 [2024-11-17 23:22:30.323266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:06.615 [2024-11-17 23:22:30.323274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:06.615 [2024-11-17 23:22:30.323283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:06.615 [2024-11-17 23:22:30.323293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:06.615 [2024-11-17 23:22:30.323302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:06.615 [2024-11-17 23:22:30.323318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:06.615 [2024-11-17 23:22:30.323327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:06.615 [2024-11-17 23:22:30.323338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:06.615 [2024-11-17 23:22:30.323347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:06.615 [2024-11-17 23:22:30.323356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:06.615 [2024-11-17 23:22:30.323368] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:06.615 [2024-11-17 23:22:30.323378] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:06.615 [2024-11-17 23:22:30.323398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:06.615 [2024-11-17 23:22:30.323407] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:06.615 [2024-11-17 23:22:30.323417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:06.615 [2024-11-17 23:22:30.323426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:06.615 [2024-11-17 23:22:30.323436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.615 [2024-11-17 23:22:30.323451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:06.615 [2024-11-17 23:22:30.323461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:17:06.615 [2024-11-17 23:22:30.323470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.615 [2024-11-17 23:22:30.344084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.615 [2024-11-17 23:22:30.344137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:06.615 [2024-11-17 23:22:30.344151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.557 ms 00:17:06.615 [2024-11-17 23:22:30.344160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.615 [2024-11-17 23:22:30.344302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.615 [2024-11-17 23:22:30.344313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:06.615 [2024-11-17 23:22:30.344329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:06.615 [2024-11-17 23:22:30.344338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.616 [2024-11-17 23:22:30.371052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.616 [2024-11-17 23:22:30.371108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:06.616 [2024-11-17 23:22:30.371122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.688 ms 00:17:06.616 [2024-11-17 23:22:30.371130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.616 [2024-11-17 23:22:30.371231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.616 [2024-11-17 23:22:30.371248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:06.616 [2024-11-17 23:22:30.371258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:06.616 [2024-11-17 23:22:30.371272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.616 [2024-11-17 23:22:30.372062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.616 [2024-11-17 23:22:30.372156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:06.616 [2024-11-17 23:22:30.372171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:17:06.616 [2024-11-17 23:22:30.372181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.616 [2024-11-17 
23:22:30.372373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.616 [2024-11-17 23:22:30.372387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:06.616 [2024-11-17 23:22:30.372403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:17:06.616 [2024-11-17 23:22:30.372413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.616 [2024-11-17 23:22:30.384568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.616 [2024-11-17 23:22:30.384617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:06.616 [2024-11-17 23:22:30.384635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.128 ms 00:17:06.616 [2024-11-17 23:22:30.384644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.616 [2024-11-17 23:22:30.389736] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:06.616 [2024-11-17 23:22:30.389796] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:06.616 [2024-11-17 23:22:30.389811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.616 [2024-11-17 23:22:30.389821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:06.616 [2024-11-17 23:22:30.389831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.985 ms 00:17:06.616 [2024-11-17 23:22:30.389839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.616 [2024-11-17 23:22:30.406710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.616 [2024-11-17 23:22:30.406922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:06.616 [2024-11-17 23:22:30.406945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.761 ms 00:17:06.616 [2024-11-17 23:22:30.406955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.616 [2024-11-17 23:22:30.410145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.616 [2024-11-17 23:22:30.410330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:06.616 [2024-11-17 23:22:30.410351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.900 ms 00:17:06.616 [2024-11-17 23:22:30.410361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.616 [2024-11-17 23:22:30.412925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.616 [2024-11-17 23:22:30.412963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:06.616 [2024-11-17 23:22:30.412974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.499 ms 00:17:06.616 [2024-11-17 23:22:30.412983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.616 [2024-11-17 23:22:30.413349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.616 [2024-11-17 23:22:30.413366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:06.616 [2024-11-17 23:22:30.413377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:17:06.616 [2024-11-17 23:22:30.413386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.877 [2024-11-17 23:22:30.445813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:06.877 [2024-11-17 23:22:30.445899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:06.877 [2024-11-17 23:22:30.445914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.401 ms 00:17:06.877 [2024-11-17 23:22:30.445924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.877 [2024-11-17 23:22:30.454802] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:06.877 [2024-11-17 23:22:30.480118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.877 [2024-11-17 23:22:30.480351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:06.877 [2024-11-17 23:22:30.480374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.022 ms 00:17:06.877 [2024-11-17 23:22:30.480399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.877 [2024-11-17 23:22:30.480523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.877 [2024-11-17 23:22:30.480538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:06.877 [2024-11-17 23:22:30.480550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:06.877 [2024-11-17 23:22:30.480563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.877 [2024-11-17 23:22:30.480636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.877 [2024-11-17 23:22:30.480647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:06.877 [2024-11-17 23:22:30.480657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:06.877 [2024-11-17 23:22:30.480666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.877 [2024-11-17 23:22:30.480702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.877 [2024-11-17 23:22:30.480714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:06.877 [2024-11-17 23:22:30.480723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:06.877 [2024-11-17 23:22:30.480732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.877 [2024-11-17 23:22:30.480781] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:06.877 [2024-11-17 23:22:30.480793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.877 [2024-11-17 23:22:30.480804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:06.877 [2024-11-17 23:22:30.480813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:06.877 [2024-11-17 23:22:30.480821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.877 [2024-11-17 23:22:30.487929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.877 [2024-11-17 23:22:30.487978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:06.877 [2024-11-17 23:22:30.487989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.084 ms 00:17:06.877 [2024-11-17 23:22:30.488000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.877 [2024-11-17 23:22:30.488118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.877 [2024-11-17 23:22:30.488130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:06.877 [2024-11-17 23:22:30.488140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:06.877 [2024-11-17 23:22:30.488148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.877 [2024-11-17 23:22:30.489401] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:06.877 [2024-11-17 23:22:30.490922] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 197.387 ms, result 0 00:17:06.877 [2024-11-17 23:22:30.492240] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:06.877 [2024-11-17 23:22:30.499689] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:07.821  [2024-11-17T23:22:32.593Z] Copying: 18/256 [MB] (18 MBps) [2024-11-17T23:22:33.535Z] Copying: 28/256 [MB] (10 MBps) [2024-11-17T23:22:34.920Z] Copying: 38/256 [MB] (10 MBps) [2024-11-17T23:22:35.864Z] Copying: 49/256 [MB] (10 MBps) [2024-11-17T23:22:36.809Z] Copying: 59/256 [MB] (10 MBps) [2024-11-17T23:22:37.751Z] Copying: 70/256 [MB] (10 MBps) [2024-11-17T23:22:38.698Z] Copying: 83/256 [MB] (12 MBps) [2024-11-17T23:22:39.641Z] Copying: 99/256 [MB] (16 MBps) [2024-11-17T23:22:40.580Z] Copying: 110/256 [MB] (10 MBps) [2024-11-17T23:22:41.524Z] Copying: 121/256 [MB] (10 MBps) [2024-11-17T23:22:42.902Z] Copying: 131/256 [MB] (10 MBps) [2024-11-17T23:22:43.836Z] Copying: 145/256 [MB] (13 MBps) [2024-11-17T23:22:44.772Z] Copying: 160/256 [MB] (15 MBps) [2024-11-17T23:22:45.708Z] Copying: 172/256 [MB] (11 MBps) [2024-11-17T23:22:46.644Z] Copying: 189/256 [MB] (16 MBps) [2024-11-17T23:22:47.597Z] Copying: 212/256 [MB] (23 MBps) [2024-11-17T23:22:48.539Z] Copying: 235/256 [MB] (23 MBps) [2024-11-17T23:22:48.797Z] Copying: 253/256 [MB] (18 MBps) [2024-11-17T23:22:48.797Z] Copying: 256/256 [MB] (average 14 MBps)[2024-11-17 23:22:48.676500] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:24.976 [2024-11-17 23:22:48.677616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.976 [2024-11-17 23:22:48.677646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:24.976 [2024-11-17 23:22:48.677658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:24.976 [2024-11-17 23:22:48.677666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.976 [2024-11-17 23:22:48.677685] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:24.976 [2024-11-17 23:22:48.678120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.976 [2024-11-17 23:22:48.678136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:24.976 [2024-11-17 23:22:48.678145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:17:24.976 [2024-11-17 23:22:48.678153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.976 [2024-11-17 23:22:48.678403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.976 [2024-11-17 23:22:48.678411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:24.976 [2024-11-17 23:22:48.678419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:17:24.976 
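The 256 MiB read-back that completes above is driven by the spdk_dd invocation at ftl/trim.sh@85. As a minimal sketch (not part of the captured log), the same transfer could be replayed by hand against an already-created ftl0 bdev; the flags and paths are exactly the ones visible in the log, while the 4 KiB FTL block size is an assumption that the arithmetic supports (65536 blocks x 4 KiB = 256 MiB, matching the "Copying: 256/256 [MB]" counter):

  # Replay the trim test's read-back from the FTL bdev into a host file.
  # Assumes the ftl0 bdev described by ftl.json is intact (4 KiB blocks assumed).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
      --count=65536 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  # 65536 blocks * 4096 B/block = 268435456 B = 256 MiB

At the average 14 MBps the progress meter reports, 256 MiB takes roughly 256 / 14, about 18 seconds, which is the 23:22:30 to 23:22:48 window the timestamps show.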
[2024-11-17 23:22:48.678429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.976 [2024-11-17 23:22:48.682108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.976 [2024-11-17 23:22:48.682122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:24.976 [2024-11-17 23:22:48.682131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.664 ms 00:17:24.976 [2024-11-17 23:22:48.682139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.976 [2024-11-17 23:22:48.689052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.976 [2024-11-17 23:22:48.689077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:24.976 [2024-11-17 23:22:48.689086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.897 ms 00:17:24.976 [2024-11-17 23:22:48.689093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.976 [2024-11-17 23:22:48.691573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.976 [2024-11-17 23:22:48.691704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:24.976 [2024-11-17 23:22:48.691718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.422 ms 00:17:24.976 [2024-11-17 23:22:48.691724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.976 [2024-11-17 23:22:48.695247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.976 [2024-11-17 23:22:48.695284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:24.976 [2024-11-17 23:22:48.695293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.493 ms 00:17:24.976 [2024-11-17 23:22:48.695300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.976 [2024-11-17 23:22:48.695414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.976 [2024-11-17 23:22:48.695422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:24.976 [2024-11-17 23:22:48.695430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:24.976 [2024-11-17 23:22:48.695437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.976 [2024-11-17 23:22:48.697862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.976 [2024-11-17 23:22:48.697901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:24.976 [2024-11-17 23:22:48.697909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.405 ms 00:17:24.976 [2024-11-17 23:22:48.697916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.976 [2024-11-17 23:22:48.700195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.976 [2024-11-17 23:22:48.700223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:24.976 [2024-11-17 23:22:48.700231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.250 ms 00:17:24.976 [2024-11-17 23:22:48.700237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.976 [2024-11-17 23:22:48.701821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.976 [2024-11-17 23:22:48.701942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:24.976 [2024-11-17 23:22:48.701956] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.555 ms 00:17:24.976 [2024-11-17 23:22:48.701963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.976 [2024-11-17 23:22:48.703751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.976 [2024-11-17 23:22:48.703788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:24.976 [2024-11-17 23:22:48.703799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.497 ms 00:17:24.976 [2024-11-17 23:22:48.703805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.976 [2024-11-17 23:22:48.703835] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:24.976 [2024-11-17 23:22:48.703849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:24.976 [2024-11-17 23:22:48.703858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:24.976 [2024-11-17 23:22:48.703866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:24.976 [2024-11-17 23:22:48.703873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:24.976 [2024-11-17 23:22:48.703891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:24.976 [2024-11-17 23:22:48.703899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:24.976 [2024-11-17 23:22:48.703906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:24.976 [2024-11-17 23:22:48.703913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:24.976 [2024-11-17 23:22:48.703921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:24.976 [2024-11-17 23:22:48.703928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:24.976 [2024-11-17 23:22:48.703935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:24.976 [2024-11-17 23:22:48.703943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:24.976 [2024-11-17 23:22:48.703950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:24.976 [2024-11-17 23:22:48.703957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:24.976 [2024-11-17 23:22:48.703964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:24.976 [2024-11-17 23:22:48.703971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:24.976 [2024-11-17 23:22:48.703979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:24.976 [2024-11-17 23:22:48.703986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:24.976 [2024-11-17 23:22:48.703993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:24.976 [2024-11-17 23:22:48.704000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 
261120 wr_cnt: 0 state: free
00:17:24.976 [2024-11-17 23:22:48.704007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21 through Band 100: 0 / 261120 wr_cnt: 0 state: free (80 identical entries)
00:17:24.977 [2024-11-17 23:22:48.704607] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:24.977 [2024-11-17 23:22:48.704614] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af926b10-56ee-4fe0-b2b0-4ca61c941c28
00:17:24.977 [2024-11-17 23:22:48.704622] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:24.977 [2024-11-17 23:22:48.704629] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:24.977 [2024-11-17 23:22:48.704636] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:24.977 [2024-11-17 23:22:48.704643] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:24.977 [2024-11-17 23:22:48.704649] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
00:17:24.977 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics (duration: 0.849 ms, status: 0)
00:17:24.977 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P (duration: 1.382 ms, status: 0)
00:17:24.977 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration: 0.056 ms, status: 0)
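The statistics block above reports total writes: 960 against user writes: 0, which is why WAF prints as inf: write amplification is media writes divided by user writes, and with no user data written yet all 960 writes are FTL housekeeping. A minimal sketch of that arithmetic (a hypothetical helper, not SPDK code):

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Write amplification factor as dumped above: media writes / user writes.
 * With zero user writes the ratio is undefined and is reported as "inf". */
static double waf(uint64_t total_writes, uint64_t user_writes)
{
    if (user_writes == 0)
        return INFINITY;
    return (double)total_writes / (double)user_writes;
}

int main(void)
{
    printf("WAF: %g\n", waf(960, 0)); /* matches the dump: WAF: inf */
    return 0;
}
```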
00:17:24.977 [2024-11-17 23:22:48.711116] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev (each: duration: 0.000 ms, status: 0)
00:17:24.977 [2024-11-17 23:22:48.726861] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.227 ms, result 0
00:17:25.235 23:22:48 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:17:25.235 23:22:48 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:17:25.811 23:22:49 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:25.811 [2024-11-17 23:22:49.494791] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
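The trim check above compares the first 4 MiB (4194304 bytes) of the exported data file against /dev/zero and then records its md5sum, i.e. it asserts that a trimmed region reads back as all zeroes before a fresh random pattern is written through ftl0. A rough standalone equivalent of that cmp (a hypothetical check; the path is taken from the command above):

```c
#include <stdio.h>
#include <string.h>

/* Check that the first 4 MiB of the exported file read back as zeroes,
 * the property "cmp --bytes=4194304 <data> /dev/zero" asserts. */
int main(void)
{
    enum { TOTAL = 4194304, CHUNK = 65536 };
    static unsigned char buf[CHUNK];
    static const unsigned char zero[CHUNK]; /* zero-initialized */
    FILE *f = fopen("/home/vagrant/spdk_repo/spdk/test/ftl/data", "rb");

    if (!f) { perror("fopen"); return 1; }
    for (long off = 0; off < TOTAL; off += CHUNK) {
        if (fread(buf, 1, CHUNK, f) != CHUNK) {
            fprintf(stderr, "short read at offset %ld\n", off);
            return 1;
        }
        if (memcmp(buf, zero, CHUNK) != 0) {
            fprintf(stderr, "non-zero byte in offset range %ld..%ld\n",
                    off, off + CHUNK);
            return 1;
        }
    }
    fclose(f);
    puts("first 4 MiB are zero");
    return 0;
}
```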
00:17:25.811 [2024-11-17 23:22:49.494917] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85285 ]
00:17:26.070 [2024-11-17 23:22:49.641462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:26.070 [2024-11-17 23:22:49.660028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:26.070 [2024-11-17 23:22:49.745487] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:17:26.070 [2024-11-17 23:22:49.745548] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:17:26.329 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration (duration: 0.004 ms, status: 0)
00:17:26.329 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev (duration: 2.270 ms, status: 0)
00:17:26.329 [2024-11-17 23:22:49.901757] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:17:26.329 [2024-11-17 23:22:49.901992] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:17:26.329 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev (duration: 0.261 ms, status: 0)
00:17:26.329 [2024-11-17 23:22:49.903114] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:17:26.329 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block (duration: 2.506 ms, status: 0)
00:17:26.329 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block (duration: 0.016 ms, status: 0)
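Each FTL management step above is traced with a name, a duration in milliseconds, and a status. When post-processing a run like this one, those durations can be scraped back out of the text; a small sketch (a hypothetical log-parsing helper, not part of SPDK):

```c
#include <stdio.h>
#include <string.h>

/* Pull "duration: X ms" out of one trace_step log record. */
static int parse_duration_ms(const char *line, double *ms)
{
    const char *p = strstr(line, "duration: ");
    return p != NULL && sscanf(p, "duration: %lf ms", ms) == 1;
}

int main(void)
{
    const char *rec = "[2024-11-17 23:22:49.905663] mngt/ftl_mngt.c: "
                      "trace_step: *NOTICE*: [FTL][ftl0] duration: 2.506 ms";
    double ms;

    if (parse_duration_ms(rec, &ms))
        printf("step took %.3f ms\n", ms); /* step took 2.506 ms */
    return 0;
}
```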
00:17:26.329 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools (duration: 4.588 ms, status: 0)
00:17:26.329 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands (duration: 0.060 ms, status: 0)
00:17:26.329 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device (duration: 0.006 ms, status: 0)
00:17:26.329 [2024-11-17 23:22:49.910616] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:17:26.329 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel (duration: 1.307 ms, status: 0)
00:17:26.329 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands (duration: 0.010 ms, status: 0)
00:17:26.329 [2024-11-17 23:22:49.912032] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:17:26.329 [2024-11-17 23:22:49.912051] upgrade/ftl_sb_v5.c: *NOTICE*: [FTL][ftl0] layout blob load/store: nvc 0x150 bytes, base 0x48 bytes, layout 0x190 bytes
00:17:26.329 [2024-11-17 23:22:49.912247] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:17:26.329 [2024-11-17 23:22:49.912255] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:17:26.329 [2024-11-17 23:22:49.912262] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:17:26.329 [2024-11-17 23:22:49.912270] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:17:26.329 [2024-11-17 23:22:49.912276] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:17:26.329 [2024-11-17 23:22:49.912286] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:17:26.329 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout (duration: 0.263 ms, status: 0)
00:17:26.330 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout (duration: 0.069 ms, status: 0)
00:17:26.330 [2024-11-17 23:22:49.912529] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (region: offset MiB / blocks MiB):
    sb                  0.00 /   0.12
    l2p                 0.12 /  90.00
    band_md            90.12 /   0.50
    band_md_mirror     90.62 /   0.50
    nvc_md            123.88 /   0.12
    nvc_md_mirror     124.00 /   0.12
    p2l0               91.12 /   8.00
    p2l1               99.12 /   8.00
    p2l2              107.12 /   8.00
    p2l3              115.12 /   8.00
    trim_md           123.12 /   0.25
    trim_md_mirror    123.38 /   0.25
    trim_log          123.62 /   0.12
    trim_log_mirror   123.75 /   0.12
00:17:26.330 [2024-11-17 23:22:49.912863] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout (region: offset MiB / blocks MiB):
    sb_mirror           0.00 /      0.12
    vmap           102400.25 /      3.38
    data_btm            0.25 / 102400.00
00:17:26.330 [2024-11-17 23:22:49.912955] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    type:0x0        ver:5 blk_offs:0x0      blk_sz:0x20
    type:0x2        ver:0 blk_offs:0x20     blk_sz:0x5a00
    type:0x3        ver:2 blk_offs:0x5a20   blk_sz:0x80
    type:0x4        ver:2 blk_offs:0x5aa0   blk_sz:0x80
    type:0xa        ver:2 blk_offs:0x5b20   blk_sz:0x800
    type:0xb        ver:2 blk_offs:0x6320   blk_sz:0x800
    type:0xc        ver:2 blk_offs:0x6b20   blk_sz:0x800
    type:0xd        ver:2 blk_offs:0x7320   blk_sz:0x800
    type:0xe        ver:0 blk_offs:0x7b20   blk_sz:0x40
    type:0xf        ver:0 blk_offs:0x7b60   blk_sz:0x40
    type:0x10       ver:1 blk_offs:0x7ba0   blk_sz:0x20
    type:0x11       ver:1 blk_offs:0x7bc0   blk_sz:0x20
    type:0x6        ver:2 blk_offs:0x7be0   blk_sz:0x20
    type:0x7        ver:2 blk_offs:0x7c00   blk_sz:0x20
    type:0xfffffffe ver:0 blk_offs:0x7c20   blk_sz:0x13b6e0
00:17:26.330 [2024-11-17 23:22:49.913078] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    type:0x1        ver:5 blk_offs:0x0       blk_sz:0x20
    type:0xfffffffe ver:0 blk_offs:0x20      blk_sz:0x20
    type:0x9        ver:0 blk_offs:0x40      blk_sz:0x1900000
    type:0x5        ver:0 blk_offs:0x1900040 blk_sz:0x360
    type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:17:26.330 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade (duration: 0.672 ms, status: 0)
00:17:26.330 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata (duration: 8.374 ms, status: 0)
00:17:26.331 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses (duration: 0.061 ms, status: 0)
00:17:26.331 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache (duration: 18.331 ms, status: 0)
00:17:26.331 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map (duration: 0.005 ms, status: 0)
00:17:26.331 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map (duration: 0.326 ms, status: 0)
00:17:26.331 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata (duration: 0.152 ms, status: 0)
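The layout numbers above are internally consistent: 23592960 L2P entries at the reported address size of 4 bytes is exactly 90 MiB, matching the l2p region, and the same entry count at a 4 KiB logical block size (an assumption here, not stated in the log) gives 90 GiB of user-addressable space out of the 103424 MiB (101 GiB) base device, the remainder being metadata and over-provisioning. A quick check of that arithmetic:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t l2p_entries = 23592960; /* from the layout dump        */
    const uint64_t addr_size   = 4;        /* "L2P address size: 4"       */
    const uint64_t block_size  = 4096;     /* assumed FTL logical block   */

    /* 23592960 * 4 B = 94371840 B = 90.00 MiB, the l2p region size */
    printf("l2p region: %.2f MiB\n",
           (double)(l2p_entries * addr_size) / (1024 * 1024));

    /* 23592960 * 4 KiB = 90 GiB of user-addressable space */
    printf("user space: %.2f GiB\n",
           (double)(l2p_entries * block_size) / (1024.0 * 1024 * 1024));
    return 0;
}
```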
00:17:26.331 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc (duration: 6.339 ms, status: 0)
00:17:26.331 [2024-11-17 23:22:49.951220] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:17:26.331 [2024-11-17 23:22:49.951329] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:17:26.331 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata (duration: 2.685 ms, status: 0)
00:17:26.331 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata (duration: 14.254 ms, status: 0)
00:17:26.331 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata (duration: 1.837 ms, status: 0)
00:17:26.331 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata (duration: 1.552 ms, status: 0)
00:17:26.331 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing (duration: 0.247 ms, status: 0)
00:17:26.331 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints (duration: 15.764 ms, status: 0)
00:17:26.331 [2024-11-17 23:22:49.994360] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:17:26.331 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P (duration: 21.698 ms, status: 0)
00:17:26.331 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P (duration: 0.009 ms, status: 0)
00:17:26.331 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization (duration: 0.040 ms, status: 0)
00:17:26.331 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller (duration: 0.006 ms, status: 0)
00:17:26.331 [2024-11-17 23:22:50.009783] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:17:26.331 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup (duration: 0.025 ms, status: 0)
00:17:26.331 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state (duration: 4.346 ms, status: 0)
00:17:26.331 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization (duration: 0.047 ms, status: 0)
00:17:26.331 [2024-11-17 23:22:50.015843] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:26.331 [2024-11-17 23:22:50.017213] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 116.312 ms, result 0
00:17:26.331 [2024-11-17 23:22:50.018715] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:26.331 [2024-11-17 23:22:50.025923] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:26.591 [2024-11-17T23:22:50.412Z] Copying: 4096/4096 [kB] (average 19 MBps)
00:17:26.591 [2024-11-17 23:22:50.228827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:26.591 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel (duration: 0.003 ms, status: 0)
00:17:26.591 [2024-11-17 23:22:50.229925] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:17:26.591 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device (duration: 0.387 ms, status: 0)
00:17:26.591 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller (duration: 1.633 ms, status: 0)
00:17:26.591 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P (duration: 4.324 ms, status: 0)
00:17:26.591 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims (duration: 6.901 ms, status: 0)
00:17:26.591 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata (duration: 2.375 ms, status: 0)
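The progress line above reports 4096 kB copied at an average of 19 MBps, i.e. 1024 blocks of 4 KiB in roughly 0.2 s, which lines up with the gap between the 'FTL startup' finish at 23:22:50.017 and the IO channel teardown at 23:22:50.228 (pairing those two stamps with the copy is an inference from the log, not something it states). The average is just bytes over elapsed time:

```c
#include <stdio.h>

/* Average throughput implied by the progress line: data copied over
 * elapsed time, using the two FTL log stamps around the copy. */
int main(void)
{
    const double mib_copied = 4096.0 / 1024.0; /* 4096 kB from the line  */
    const double elapsed_s  = 50.228 - 50.017; /* 23:22:50.017 -> .228   */

    printf("average: %.0f MBps\n", mib_copied / elapsed_s); /* ~19 */
    return 0;
}
```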
00:17:26.591 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata (duration: 3.582 ms, status: 0)
00:17:26.591 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata (duration: 0.084 ms, status: 0)
00:17:26.591 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata (duration: 2.751 ms, status: 0)
00:17:26.591 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata (duration: 2.198 ms, status: 0)
00:17:26.591 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock (duration: 1.707 ms, status: 0)
00:17:26.591 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state (duration: 1.459 ms, status: 0)
00:17:26.591 [2024-11-17 23:22:50.258284] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:17:26.591 [2024-11-17 23:22:50.258298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 through Band 100: 0 / 261120 wr_cnt: 0 state: free (100 identical entries)
00:17:26.592 [2024-11-17 23:22:50.259048] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:26.592 [2024-11-17 23:22:50.259055] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af926b10-56ee-4fe0-b2b0-4ca61c941c28
00:17:26.592 [2024-11-17 23:22:50.259063] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:26.592 [2024-11-17 23:22:50.259070] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:26.592 [2024-11-17 23:22:50.259077] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:26.592 [2024-11-17 23:22:50.259084] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:26.592 [2024-11-17 23:22:50.259090] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
00:17:26.592 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics (duration: 0.842 ms, status: 0)
00:17:26.592 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P (duration: 1.362 ms, status: 0)
00:17:26.592 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration: 0.056 ms, status: 0)
00:17:26.592 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc (duration: 0.000 ms, status: 0)
00:17:26.592 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata (duration: 0.000 ms, status: 0)
00:17:26.592 mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map (duration: 0.000 ms, status: 0)
00:17:26.592 [2024-11-17 23:22:50.265739] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:26.592 [2024-11-17 23:22:50.265747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:26.593 [2024-11-17 23:22:50.265754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.593 [2024-11-17 23:22:50.265763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.593 [2024-11-17 23:22:50.274042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.593 [2024-11-17 23:22:50.274075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:26.593 [2024-11-17 23:22:50.274084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.593 [2024-11-17 23:22:50.274092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.593 [2024-11-17 23:22:50.280714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.593 [2024-11-17 23:22:50.280859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:26.593 [2024-11-17 23:22:50.280875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.593 [2024-11-17 23:22:50.280948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.593 [2024-11-17 23:22:50.280992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.593 [2024-11-17 23:22:50.281002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:26.593 [2024-11-17 23:22:50.281010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.593 [2024-11-17 23:22:50.281017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.593 [2024-11-17 23:22:50.281044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.593 [2024-11-17 23:22:50.281057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:26.593 [2024-11-17 23:22:50.281064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.593 [2024-11-17 23:22:50.281071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.593 [2024-11-17 23:22:50.281147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.593 [2024-11-17 23:22:50.281157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:26.593 [2024-11-17 23:22:50.281165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.593 [2024-11-17 23:22:50.281172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.593 [2024-11-17 23:22:50.281206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.593 [2024-11-17 23:22:50.281214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:26.593 [2024-11-17 23:22:50.281224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.593 [2024-11-17 23:22:50.281231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.593 [2024-11-17 23:22:50.281267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.593 [2024-11-17 23:22:50.281275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:26.593 [2024-11-17 23:22:50.281283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.593 [2024-11-17 23:22:50.281290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:26.593 [2024-11-17 23:22:50.281337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.593 [2024-11-17 23:22:50.281348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:26.593 [2024-11-17 23:22:50.281356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.593 [2024-11-17 23:22:50.281363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.593 [2024-11-17 23:22:50.281498] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.647 ms, result 0 00:17:26.851 00:17:26.851 00:17:26.851 23:22:50 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=85299 00:17:26.851 23:22:50 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 85299 00:17:26.851 23:22:50 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 85299 ']' 00:17:26.851 23:22:50 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:26.851 23:22:50 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.851 23:22:50 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:26.851 23:22:50 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.851 23:22:50 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:26.851 23:22:50 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:26.851 [2024-11-17 23:22:50.508991] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:17:26.851 [2024-11-17 23:22:50.509118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85299 ] 00:17:26.851 [2024-11-17 23:22:50.654807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.109 [2024-11-17 23:22:50.673552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.689 23:22:51 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.689 23:22:51 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:27.689 23:22:51 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:27.950 [2024-11-17 23:22:51.542712] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:27.950 [2024-11-17 23:22:51.542772] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:27.950 [2024-11-17 23:22:51.712291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.950 [2024-11-17 23:22:51.712453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:27.950 [2024-11-17 23:22:51.712473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:27.950 [2024-11-17 23:22:51.712483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.950 [2024-11-17 23:22:51.714950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.950 [2024-11-17 23:22:51.714991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:27.950 [2024-11-17 23:22:51.715001] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.445 ms 00:17:27.950 [2024-11-17 23:22:51.715010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.950 [2024-11-17 23:22:51.715101] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:27.950 [2024-11-17 23:22:51.715331] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:27.950 [2024-11-17 23:22:51.715346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.950 [2024-11-17 23:22:51.715360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:27.950 [2024-11-17 23:22:51.715369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:17:27.950 [2024-11-17 23:22:51.715381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.950 [2024-11-17 23:22:51.716528] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:27.950 [2024-11-17 23:22:51.719107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.950 [2024-11-17 23:22:51.719229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:27.950 [2024-11-17 23:22:51.719248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.577 ms 00:17:27.950 [2024-11-17 23:22:51.719256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.950 [2024-11-17 23:22:51.719307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.950 [2024-11-17 23:22:51.719317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:27.950 [2024-11-17 23:22:51.719328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:27.950 [2024-11-17 23:22:51.719335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.950 [2024-11-17 23:22:51.723990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.950 [2024-11-17 23:22:51.724019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:27.950 [2024-11-17 23:22:51.724029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.605 ms 00:17:27.950 [2024-11-17 23:22:51.724036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.950 [2024-11-17 23:22:51.724146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.950 [2024-11-17 23:22:51.724157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:27.950 [2024-11-17 23:22:51.724167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:27.950 [2024-11-17 23:22:51.724177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.950 [2024-11-17 23:22:51.724203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.950 [2024-11-17 23:22:51.724211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:27.950 [2024-11-17 23:22:51.724222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:27.950 [2024-11-17 23:22:51.724229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.950 [2024-11-17 23:22:51.724253] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:27.950 [2024-11-17 23:22:51.725540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:27.950 [2024-11-17 23:22:51.725570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:27.950 [2024-11-17 23:22:51.725579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.294 ms 00:17:27.950 [2024-11-17 23:22:51.725589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.950 [2024-11-17 23:22:51.725629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.950 [2024-11-17 23:22:51.725639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:27.950 [2024-11-17 23:22:51.725650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:27.950 [2024-11-17 23:22:51.725659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.950 [2024-11-17 23:22:51.725680] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:27.950 [2024-11-17 23:22:51.725698] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:27.950 [2024-11-17 23:22:51.725732] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:27.950 [2024-11-17 23:22:51.725749] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:27.950 [2024-11-17 23:22:51.725851] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:27.950 [2024-11-17 23:22:51.725862] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:27.950 [2024-11-17 23:22:51.725872] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:27.950 [2024-11-17 23:22:51.725900] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:27.950 [2024-11-17 23:22:51.725909] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:27.950 [2024-11-17 23:22:51.725921] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:27.950 [2024-11-17 23:22:51.725929] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:27.950 [2024-11-17 23:22:51.725938] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:27.950 [2024-11-17 23:22:51.725947] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:27.950 [2024-11-17 23:22:51.725956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.950 [2024-11-17 23:22:51.725966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:27.950 [2024-11-17 23:22:51.725975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:17:27.950 [2024-11-17 23:22:51.725985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.950 [2024-11-17 23:22:51.726073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.950 [2024-11-17 23:22:51.726080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:27.950 [2024-11-17 23:22:51.726092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:27.951 [2024-11-17 23:22:51.726098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.951 [2024-11-17 23:22:51.726198] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:27.951 [2024-11-17 23:22:51.726210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:27.951 [2024-11-17 23:22:51.726220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:27.951 [2024-11-17 23:22:51.726228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.951 [2024-11-17 23:22:51.726241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:27.951 [2024-11-17 23:22:51.726249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:27.951 [2024-11-17 23:22:51.726258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:27.951 [2024-11-17 23:22:51.726266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:27.951 [2024-11-17 23:22:51.726276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:27.951 [2024-11-17 23:22:51.726283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:27.951 [2024-11-17 23:22:51.726292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:27.951 [2024-11-17 23:22:51.726300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:27.951 [2024-11-17 23:22:51.726309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:27.951 [2024-11-17 23:22:51.726317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:27.951 [2024-11-17 23:22:51.726326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:27.951 [2024-11-17 23:22:51.726333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.951 [2024-11-17 23:22:51.726342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:27.951 [2024-11-17 23:22:51.726349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:27.951 [2024-11-17 23:22:51.726358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.951 [2024-11-17 23:22:51.726365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:27.951 [2024-11-17 23:22:51.726376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:27.951 [2024-11-17 23:22:51.726383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:27.951 [2024-11-17 23:22:51.726392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:27.951 [2024-11-17 23:22:51.726399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:27.951 [2024-11-17 23:22:51.726408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:27.951 [2024-11-17 23:22:51.726416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:27.951 [2024-11-17 23:22:51.726427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:27.951 [2024-11-17 23:22:51.726434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:27.951 [2024-11-17 23:22:51.726443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:27.951 [2024-11-17 23:22:51.726450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:27.951 [2024-11-17 23:22:51.726459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:27.951 [2024-11-17 23:22:51.726466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:27.951 [2024-11-17 
23:22:51.726477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:27.951 [2024-11-17 23:22:51.726484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:27.951 [2024-11-17 23:22:51.726493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:27.951 [2024-11-17 23:22:51.726500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:27.951 [2024-11-17 23:22:51.726510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:27.951 [2024-11-17 23:22:51.726518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:27.951 [2024-11-17 23:22:51.726526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:27.951 [2024-11-17 23:22:51.726534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.951 [2024-11-17 23:22:51.726543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:27.951 [2024-11-17 23:22:51.726550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:27.951 [2024-11-17 23:22:51.726559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.951 [2024-11-17 23:22:51.726566] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:27.951 [2024-11-17 23:22:51.726574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:27.951 [2024-11-17 23:22:51.726581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:27.951 [2024-11-17 23:22:51.726589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.951 [2024-11-17 23:22:51.726597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:27.951 [2024-11-17 23:22:51.726605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:27.951 [2024-11-17 23:22:51.726611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:27.951 [2024-11-17 23:22:51.726619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:27.951 [2024-11-17 23:22:51.726625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:27.951 [2024-11-17 23:22:51.726634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:27.951 [2024-11-17 23:22:51.726642] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:27.951 [2024-11-17 23:22:51.726652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:27.951 [2024-11-17 23:22:51.726660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:27.951 [2024-11-17 23:22:51.726669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:27.951 [2024-11-17 23:22:51.726676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:27.951 [2024-11-17 23:22:51.726686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:27.951 [2024-11-17 23:22:51.726694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:27.951 
[2024-11-17 23:22:51.726702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:27.951 [2024-11-17 23:22:51.726709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:27.951 [2024-11-17 23:22:51.726718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:27.951 [2024-11-17 23:22:51.726725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:27.951 [2024-11-17 23:22:51.726734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:27.951 [2024-11-17 23:22:51.726740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:27.951 [2024-11-17 23:22:51.726749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:27.951 [2024-11-17 23:22:51.726756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:27.951 [2024-11-17 23:22:51.726771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:27.951 [2024-11-17 23:22:51.726779] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:27.951 [2024-11-17 23:22:51.726790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:27.951 [2024-11-17 23:22:51.726798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:27.951 [2024-11-17 23:22:51.726806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:27.951 [2024-11-17 23:22:51.726814] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:27.951 [2024-11-17 23:22:51.726822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:27.951 [2024-11-17 23:22:51.726829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.951 [2024-11-17 23:22:51.726838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:27.951 [2024-11-17 23:22:51.726845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:17:27.951 [2024-11-17 23:22:51.726853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.951 [2024-11-17 23:22:51.735436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.952 [2024-11-17 23:22:51.735551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:27.952 [2024-11-17 23:22:51.735600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.322 ms 00:17:27.952 [2024-11-17 23:22:51.735625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.952 [2024-11-17 23:22:51.735759] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.952 [2024-11-17 23:22:51.735793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:27.952 [2024-11-17 23:22:51.735812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:27.952 [2024-11-17 23:22:51.735832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.952 [2024-11-17 23:22:51.744024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.952 [2024-11-17 23:22:51.744132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:27.952 [2024-11-17 23:22:51.744180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.159 ms 00:17:27.952 [2024-11-17 23:22:51.744205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.952 [2024-11-17 23:22:51.744263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.952 [2024-11-17 23:22:51.744289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:27.952 [2024-11-17 23:22:51.744309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:27.952 [2024-11-17 23:22:51.744329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.952 [2024-11-17 23:22:51.744633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.952 [2024-11-17 23:22:51.744678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:27.952 [2024-11-17 23:22:51.744699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:17:27.952 [2024-11-17 23:22:51.744719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.952 [2024-11-17 23:22:51.744851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.952 [2024-11-17 23:22:51.744907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:27.952 [2024-11-17 23:22:51.744932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:17:27.952 [2024-11-17 23:22:51.744952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.952 [2024-11-17 23:22:51.750193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.952 [2024-11-17 23:22:51.750295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:27.952 [2024-11-17 23:22:51.750343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.170 ms 00:17:27.952 [2024-11-17 23:22:51.750387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.952 [2024-11-17 23:22:51.752798] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:27.952 [2024-11-17 23:22:51.752948] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:27.952 [2024-11-17 23:22:51.753006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.952 [2024-11-17 23:22:51.753029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:27.952 [2024-11-17 23:22:51.753141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.503 ms 00:17:27.952 [2024-11-17 23:22:51.753211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.211 [2024-11-17 23:22:51.767603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.211 [2024-11-17 
23:22:51.767712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:28.211 [2024-11-17 23:22:51.767760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.330 ms 00:17:28.211 [2024-11-17 23:22:51.767786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.211 [2024-11-17 23:22:51.769663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.211 [2024-11-17 23:22:51.769759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:28.211 [2024-11-17 23:22:51.769804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.807 ms 00:17:28.211 [2024-11-17 23:22:51.769827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.211 [2024-11-17 23:22:51.771500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.211 [2024-11-17 23:22:51.771593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:28.211 [2024-11-17 23:22:51.771606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms 00:17:28.211 [2024-11-17 23:22:51.771615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.211 [2024-11-17 23:22:51.771956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.211 [2024-11-17 23:22:51.771978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:28.211 [2024-11-17 23:22:51.771987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:17:28.211 [2024-11-17 23:22:51.771995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.211 [2024-11-17 23:22:51.803216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.211 [2024-11-17 23:22:51.803271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:28.211 [2024-11-17 23:22:51.803285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.185 ms 00:17:28.211 [2024-11-17 23:22:51.803297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.211 [2024-11-17 23:22:51.810661] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:28.211 [2024-11-17 23:22:51.824049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.211 [2024-11-17 23:22:51.824197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:28.211 [2024-11-17 23:22:51.824217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.693 ms 00:17:28.211 [2024-11-17 23:22:51.824226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.211 [2024-11-17 23:22:51.824318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.211 [2024-11-17 23:22:51.824329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:28.211 [2024-11-17 23:22:51.824342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:28.211 [2024-11-17 23:22:51.824349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.211 [2024-11-17 23:22:51.824398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.211 [2024-11-17 23:22:51.824408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:28.211 [2024-11-17 23:22:51.824418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:28.211 [2024-11-17 
23:22:51.824425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:28.211 [2024-11-17 23:22:51.824452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:28.211 [2024-11-17 23:22:51.824460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:17:28.211 [2024-11-17 23:22:51.824470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:17:28.211 [2024-11-17 23:22:51.824479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:28.211 [2024-11-17 23:22:51.824512] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:17:28.211 [2024-11-17 23:22:51.824521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:28.211 [2024-11-17 23:22:51.824530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:17:28.211 [2024-11-17 23:22:51.824537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:17:28.211 [2024-11-17 23:22:51.824545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:28.211 [2024-11-17 23:22:51.828497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:28.211 [2024-11-17 23:22:51.828534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:17:28.211 [2024-11-17 23:22:51.828544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.927 ms
00:17:28.211 [2024-11-17 23:22:51.828556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:28.211 [2024-11-17 23:22:51.828639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:28.211 [2024-11-17 23:22:51.828651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:17:28.211 [2024-11-17 23:22:51.828660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:17:28.211 [2024-11-17 23:22:51.828669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:28.211 [2024-11-17 23:22:51.829443] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:28.211 [2024-11-17 23:22:51.830413] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 116.875 ms, result 0
00:17:28.211 [2024-11-17 23:22:51.832198] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:28.211 Some configs were skipped because the RPC state that can call them passed over.
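In shell form, the exercise this stretch of the log records (ftl/trim.sh@92 through @102) amounts to the sketch below. The bdev name ftl0, the RPC socket path, and both LBA windows are taken from the surrounding log; the config file name is a placeholder, and note that 23591936 = 23592960 (the L2P entry count reported during startup) - 1024, so the two trims hit the first and the last 1024-block window of the device:

  # Sketch only, not the verbatim test script; waitforlisten and killprocess are
  # helpers from common/autotest_common.sh, and ftl.json is an assumed file name.
  build/bin/spdk_tgt -L ftl_init &          # relaunch the target with FTL init tracing
  svcpid=$!
  waitforlisten "$svcpid"                   # block until /var/tmp/spdk.sock accepts RPCs
  scripts/rpc.py load_config < ftl.json     # recreate the ftl0 bdev from the saved config
  scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024         # trim the first window
  scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024  # trim the last window
  killprocess "$svcpid"                     # clean kill; drives the 'FTL shutdown' trace below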
00:17:28.211 23:22:51 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:17:28.470 [2024-11-17 23:22:52.055994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:28.470 [2024-11-17 23:22:52.056121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:17:28.470 [2024-11-17 23:22:52.056179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.791 ms
00:17:28.470 [2024-11-17 23:22:52.056202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:28.470 [2024-11-17 23:22:52.056255] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.053 ms, result 0
00:17:28.470 true
00:17:28.470 23:22:52 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:17:28.470 [2024-11-17 23:22:52.255311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:28.470 [2024-11-17 23:22:52.255425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:17:28.470 [2024-11-17 23:22:52.255480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.912 ms
00:17:28.470 [2024-11-17 23:22:52.255504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:28.470 [2024-11-17 23:22:52.255571] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.169 ms, result 0
00:17:28.470 true
00:17:28.470 23:22:52 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 85299
00:17:28.470 23:22:52 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 85299 ']'
00:17:28.470 23:22:52 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 85299
00:17:28.470 23:22:52 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:17:28.470 23:22:52 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:28.470 23:22:52 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85299
00:17:28.730 killing process with pid 85299
23:22:52 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:28.730 23:22:52 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:28.730 23:22:52 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85299'
00:17:28.730 23:22:52 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 85299
00:17:28.730 23:22:52 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 85299
00:17:28.730 [2024-11-17 23:22:52.386263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:28.730 [2024-11-17 23:22:52.386314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:28.730 [2024-11-17 23:22:52.386328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:17:28.730 [2024-11-17 23:22:52.386335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:28.730 [2024-11-17 23:22:52.386360] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:17:28.730 [2024-11-17 23:22:52.386783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:28.730 [2024-11-17 23:22:52.386808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:28.730 [2024-11-17 23:22:52.386819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.410 ms 00:17:28.730 [2024-11-17 23:22:52.386833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.730 [2024-11-17 23:22:52.387124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.730 [2024-11-17 23:22:52.387143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:28.730 [2024-11-17 23:22:52.387153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:17:28.730 [2024-11-17 23:22:52.387162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.730 [2024-11-17 23:22:52.391690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.730 [2024-11-17 23:22:52.391721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:28.730 [2024-11-17 23:22:52.391731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.509 ms 00:17:28.730 [2024-11-17 23:22:52.391746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.730 [2024-11-17 23:22:52.398744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.730 [2024-11-17 23:22:52.398774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:28.730 [2024-11-17 23:22:52.398784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.963 ms 00:17:28.730 [2024-11-17 23:22:52.398794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.730 [2024-11-17 23:22:52.400972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.730 [2024-11-17 23:22:52.401092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:28.730 [2024-11-17 23:22:52.401105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.107 ms 00:17:28.730 [2024-11-17 23:22:52.401114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.730 [2024-11-17 23:22:52.404669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.730 [2024-11-17 23:22:52.404776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:28.730 [2024-11-17 23:22:52.404790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.523 ms 00:17:28.730 [2024-11-17 23:22:52.404802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.730 [2024-11-17 23:22:52.404934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.730 [2024-11-17 23:22:52.404946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:28.730 [2024-11-17 23:22:52.404954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:17:28.730 [2024-11-17 23:22:52.404962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.730 [2024-11-17 23:22:52.407206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.730 [2024-11-17 23:22:52.407238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:28.730 [2024-11-17 23:22:52.407246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.227 ms 00:17:28.730 [2024-11-17 23:22:52.407259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.730 [2024-11-17 23:22:52.409478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.730 [2024-11-17 23:22:52.409510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:28.730 [2024-11-17 
23:22:52.409518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.187 ms 00:17:28.730 [2024-11-17 23:22:52.409527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.730 [2024-11-17 23:22:52.411202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.730 [2024-11-17 23:22:52.411300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:28.731 [2024-11-17 23:22:52.411312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.644 ms 00:17:28.731 [2024-11-17 23:22:52.411321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.731 [2024-11-17 23:22:52.412952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.731 [2024-11-17 23:22:52.412986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:28.731 [2024-11-17 23:22:52.412994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.574 ms 00:17:28.731 [2024-11-17 23:22:52.413003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.731 [2024-11-17 23:22:52.413033] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:28.731 [2024-11-17 23:22:52.413049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:28.731 [2024-11-17 23:22:52.413058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:28.731 [2024-11-17 23:22:52.413070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:28.731 [2024-11-17 23:22:52.413077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:28.731 [2024-11-17 23:22:52.413086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:28.731 [2024-11-17 23:22:52.413093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:28.731 [2024-11-17 23:22:52.413101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:28.731 [2024-11-17 23:22:52.413108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:28.731 [2024-11-17 23:22:52.413117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:28.731 [2024-11-17 23:22:52.413124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:28.731 [2024-11-17 23:22:52.413133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:28.731 [2024-11-17 23:22:52.413140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:28.731 [2024-11-17 23:22:52.413150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:28.731 [2024-11-17 23:22:52.413158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:28.731 [2024-11-17 23:22:52.413166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:28.731 [2024-11-17 23:22:52.413173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:28.731 [2024-11-17 23:22:52.413189] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
[... Band 18 through Band 100 identical: 0 / 261120 wr_cnt: 0 state: free ...]
00:17:28.732 [2024-11-17 23:22:52.413895] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:28.732 [2024-11-17 23:22:52.413903] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af926b10-56ee-4fe0-b2b0-4ca61c941c28
00:17:28.732 [2024-11-17 23:22:52.413912] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:28.732 [2024-11-17 23:22:52.413921] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:28.732 [2024-11-17 23:22:52.413930] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:28.732 [2024-11-17 23:22:52.413938] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:28.732 [2024-11-17 23:22:52.413946] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:28.732 [2024-11-17 23:22:52.413954] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:28.732 [2024-11-17 23:22:52.413965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:28.732 [2024-11-17 23:22:52.413971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:28.732 [2024-11-17 23:22:52.413979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:17:28.732 [2024-11-17 23:22:52.413986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:28.732 [2024-11-17 23:22:52.413994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:28.732 [2024-11-17 23:22:52.414003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms
00:17:28.732 [2024-11-17 23:22:52.414014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:28.732 [2024-11-17 23:22:52.415372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:28.732 [2024-11-17 23:22:52.415394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:28.732 [2024-11-17 23:22:52.415403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.341 ms
00:17:28.732 [2024-11-17 23:22:52.415412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:28.732 [2024-11-17 23:22:52.415499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:17:28.732 [2024-11-17 23:22:52.415509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:28.732 [2024-11-17 23:22:52.415517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:28.732 [2024-11-17 23:22:52.415526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.732 [2024-11-17 23:22:52.420610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.732 [2024-11-17 23:22:52.420643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:28.732 [2024-11-17 23:22:52.420652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.732 [2024-11-17 23:22:52.420661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.732 [2024-11-17 23:22:52.420732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.732 [2024-11-17 23:22:52.420743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:28.732 [2024-11-17 23:22:52.420751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.732 [2024-11-17 23:22:52.420761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.732 [2024-11-17 23:22:52.420799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.732 [2024-11-17 23:22:52.420809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:28.732 [2024-11-17 23:22:52.420817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.732 [2024-11-17 23:22:52.420826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.732 [2024-11-17 23:22:52.420843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.732 [2024-11-17 23:22:52.420852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:28.732 [2024-11-17 23:22:52.420859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.732 [2024-11-17 23:22:52.420868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.732 [2024-11-17 23:22:52.429596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.732 [2024-11-17 23:22:52.429634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:28.732 [2024-11-17 23:22:52.429648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.732 [2024-11-17 23:22:52.429657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.732 [2024-11-17 23:22:52.436518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.732 [2024-11-17 23:22:52.436643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:28.732 [2024-11-17 23:22:52.436693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.732 [2024-11-17 23:22:52.436720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.732 [2024-11-17 23:22:52.436787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.732 [2024-11-17 23:22:52.436815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:28.732 [2024-11-17 23:22:52.436834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.732 [2024-11-17 23:22:52.436854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
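Every FTL management step in this log is reported as the same trace_step quartet: a type marker (Action during startup, Rollback during teardown), the step name, its duration in milliseconds, and a status code (0 on success). A hypothetical modeling sketch of that record, useful when post-processing such logs; the struct and function names here are illustrative, not SPDK's internals:

#include <stdio.h>

/* Hypothetical model of one trace_step quartet as it appears in the log:
 * type (Action or Rollback), step name, duration in ms, status code. */
struct step_trace {
    const char *type;
    const char *name;
    double duration_ms;
    int status;
};

static void dump_step(const struct step_trace *s)
{
    /* Mirrors the four NOTICE lines emitted per management step. */
    printf("[FTL][ftl0] %s\n", s->type);
    printf("[FTL][ftl0]   name: %s\n", s->name);
    printf("[FTL][ftl0]   duration: %.3f ms\n", s->duration_ms);
    printf("[FTL][ftl0]   status: %d\n", s->status);
}

int main(void)
{
    /* Two example quartets taken from the shutdown sequence above. */
    struct step_trace steps[] = {
        { "Action",   "Dump statistics",  0.954, 0 },
        { "Rollback", "Initialize reloc", 0.000, 0 },
    };
    for (unsigned i = 0; i < sizeof(steps) / sizeof(steps[0]); i++)
        dump_step(&steps[i]);
    return 0;
}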
00:17:28.732 [2024-11-17 23:22:52.436915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.732 [2024-11-17 23:22:52.436939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:28.732 [2024-11-17 23:22:52.436960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.732 [2024-11-17 23:22:52.437009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.732 [2024-11-17 23:22:52.437080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.732 [2024-11-17 23:22:52.437095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:28.732 [2024-11-17 23:22:52.437104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.732 [2024-11-17 23:22:52.437114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.732 [2024-11-17 23:22:52.437148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.732 [2024-11-17 23:22:52.437158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:28.732 [2024-11-17 23:22:52.437166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.732 [2024-11-17 23:22:52.437176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.732 [2024-11-17 23:22:52.437215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.732 [2024-11-17 23:22:52.437225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:28.732 [2024-11-17 23:22:52.437235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.732 [2024-11-17 23:22:52.437243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.732 [2024-11-17 23:22:52.437287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.732 [2024-11-17 23:22:52.437298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:28.732 [2024-11-17 23:22:52.437306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.732 [2024-11-17 23:22:52.437315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.732 [2024-11-17 23:22:52.437449] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.157 ms, result 0 00:17:28.990 23:22:52 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:28.990 [2024-11-17 23:22:52.658471] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
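The spdk_dd invocation above reads --count=65536 blocks from the ftl0 bdev into the test data file. Assuming a 4 KiB logical block size (an assumption; the command line itself only gives the block count), that is exactly the 256 MB total reported by the copy progress further down. A minimal sketch of the arithmetic:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t block_count = 65536; /* --count from the spdk_dd command line */
    const uint64_t block_size  = 4096;  /* assumed FTL logical block size */

    uint64_t bytes = block_count * block_size;
    /* 268435456 bytes = 256 MiB, matching "Copying: 256/256 [MB]" below. */
    printf("total: %llu bytes = %llu MiB\n",
           (unsigned long long)bytes,
           (unsigned long long)(bytes >> 20));
    return 0;
}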
00:17:28.990 [2024-11-17 23:22:52.658699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85335 ] 00:17:28.991 [2024-11-17 23:22:52.801636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.249 [2024-11-17 23:22:52.820286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.249 [2024-11-17 23:22:52.906120] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:29.249 [2024-11-17 23:22:52.906333] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:29.249 [2024-11-17 23:22:53.062045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.249 [2024-11-17 23:22:53.062087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:29.249 [2024-11-17 23:22:53.062100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:29.249 [2024-11-17 23:22:53.062108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.249 [2024-11-17 23:22:53.064319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.249 [2024-11-17 23:22:53.064352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:29.249 [2024-11-17 23:22:53.064361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.196 ms 00:17:29.249 [2024-11-17 23:22:53.064373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.249 [2024-11-17 23:22:53.064445] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:29.249 [2024-11-17 23:22:53.064663] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:29.249 [2024-11-17 23:22:53.064680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.249 [2024-11-17 23:22:53.064690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:29.249 [2024-11-17 23:22:53.064699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:17:29.249 [2024-11-17 23:22:53.064706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.249 [2024-11-17 23:22:53.065744] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:29.508 [2024-11-17 23:22:53.068041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.509 [2024-11-17 23:22:53.068075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:29.509 [2024-11-17 23:22:53.068088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.299 ms 00:17:29.509 [2024-11-17 23:22:53.068095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.509 [2024-11-17 23:22:53.068150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.509 [2024-11-17 23:22:53.068163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:29.509 [2024-11-17 23:22:53.068171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:29.509 [2024-11-17 23:22:53.068180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.509 [2024-11-17 23:22:53.072841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:29.509 [2024-11-17 23:22:53.072869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:29.509 [2024-11-17 23:22:53.072900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.622 ms 00:17:29.509 [2024-11-17 23:22:53.072908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.509 [2024-11-17 23:22:53.073003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.509 [2024-11-17 23:22:53.073014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:29.509 [2024-11-17 23:22:53.073022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:29.509 [2024-11-17 23:22:53.073029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.509 [2024-11-17 23:22:53.073056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.509 [2024-11-17 23:22:53.073064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:29.509 [2024-11-17 23:22:53.073072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:29.509 [2024-11-17 23:22:53.073083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.509 [2024-11-17 23:22:53.073105] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:29.509 [2024-11-17 23:22:53.074359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.509 [2024-11-17 23:22:53.074387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:29.509 [2024-11-17 23:22:53.074395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.261 ms 00:17:29.509 [2024-11-17 23:22:53.074407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.509 [2024-11-17 23:22:53.074441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.509 [2024-11-17 23:22:53.074454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:29.509 [2024-11-17 23:22:53.074461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:29.509 [2024-11-17 23:22:53.074468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.509 [2024-11-17 23:22:53.074485] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:29.509 [2024-11-17 23:22:53.074503] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:29.509 [2024-11-17 23:22:53.074540] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:29.509 [2024-11-17 23:22:53.074556] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:29.509 [2024-11-17 23:22:53.074656] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:29.509 [2024-11-17 23:22:53.074665] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:29.509 [2024-11-17 23:22:53.074675] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:29.509 [2024-11-17 23:22:53.074684] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:29.509 [2024-11-17 23:22:53.074693] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:29.509 [2024-11-17 23:22:53.074700] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:29.509 [2024-11-17 23:22:53.074707] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:29.509 [2024-11-17 23:22:53.074714] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:29.509 [2024-11-17 23:22:53.074720] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:29.509 [2024-11-17 23:22:53.074729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.509 [2024-11-17 23:22:53.074738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:29.509 [2024-11-17 23:22:53.074745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:17:29.509 [2024-11-17 23:22:53.074752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.509 [2024-11-17 23:22:53.074838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.509 [2024-11-17 23:22:53.074846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:29.509 [2024-11-17 23:22:53.074852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:29.509 [2024-11-17 23:22:53.074862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.509 [2024-11-17 23:22:53.074973] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:29.509 [2024-11-17 23:22:53.074983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:29.509 [2024-11-17 23:22:53.074996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:29.509 [2024-11-17 23:22:53.075003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.509 [2024-11-17 23:22:53.075031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:29.509 [2024-11-17 23:22:53.075039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:29.509 [2024-11-17 23:22:53.075047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:29.509 [2024-11-17 23:22:53.075056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:29.509 [2024-11-17 23:22:53.075066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:29.509 [2024-11-17 23:22:53.075073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:29.509 [2024-11-17 23:22:53.075081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:29.509 [2024-11-17 23:22:53.075088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:29.509 [2024-11-17 23:22:53.075095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:29.509 [2024-11-17 23:22:53.075103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:29.509 [2024-11-17 23:22:53.075111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:29.509 [2024-11-17 23:22:53.075118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.509 [2024-11-17 23:22:53.075125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:29.509 [2024-11-17 23:22:53.075134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:29.509 [2024-11-17 23:22:53.075142] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.509 [2024-11-17 23:22:53.075149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:29.509 [2024-11-17 23:22:53.075157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:29.509 [2024-11-17 23:22:53.075164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.509 [2024-11-17 23:22:53.075172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:29.509 [2024-11-17 23:22:53.075179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:29.509 [2024-11-17 23:22:53.075190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.509 [2024-11-17 23:22:53.075201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:29.509 [2024-11-17 23:22:53.075209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:29.509 [2024-11-17 23:22:53.075216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.509 [2024-11-17 23:22:53.075223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:29.509 [2024-11-17 23:22:53.075231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:29.509 [2024-11-17 23:22:53.075238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.509 [2024-11-17 23:22:53.075245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:29.509 [2024-11-17 23:22:53.075252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:29.509 [2024-11-17 23:22:53.075259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:29.509 [2024-11-17 23:22:53.075267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:29.509 [2024-11-17 23:22:53.075274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:29.509 [2024-11-17 23:22:53.075281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:29.509 [2024-11-17 23:22:53.075289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:29.509 [2024-11-17 23:22:53.075297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:29.509 [2024-11-17 23:22:53.075304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.509 [2024-11-17 23:22:53.075313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:29.509 [2024-11-17 23:22:53.075320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:29.509 [2024-11-17 23:22:53.075327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.509 [2024-11-17 23:22:53.075335] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:29.509 [2024-11-17 23:22:53.075343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:29.509 [2024-11-17 23:22:53.075351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:29.509 [2024-11-17 23:22:53.075359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.509 [2024-11-17 23:22:53.075367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:29.509 [2024-11-17 23:22:53.075375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:29.509 [2024-11-17 23:22:53.075383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:29.509 
[2024-11-17 23:22:53.075390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:29.509 [2024-11-17 23:22:53.075397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:29.509 [2024-11-17 23:22:53.075403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:29.509 [2024-11-17 23:22:53.075411] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:29.510 [2024-11-17 23:22:53.075419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:29.510 [2024-11-17 23:22:53.075427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:29.510 [2024-11-17 23:22:53.075436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:29.510 [2024-11-17 23:22:53.075443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:29.510 [2024-11-17 23:22:53.075450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:29.510 [2024-11-17 23:22:53.075457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:29.510 [2024-11-17 23:22:53.075464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:29.510 [2024-11-17 23:22:53.075470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:29.510 [2024-11-17 23:22:53.075477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:29.510 [2024-11-17 23:22:53.075484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:29.510 [2024-11-17 23:22:53.075496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:29.510 [2024-11-17 23:22:53.075503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:29.510 [2024-11-17 23:22:53.075509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:29.510 [2024-11-17 23:22:53.075517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:29.510 [2024-11-17 23:22:53.075524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:29.510 [2024-11-17 23:22:53.075531] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:29.510 [2024-11-17 23:22:53.075538] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:29.510 [2024-11-17 23:22:53.075548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:29.510 [2024-11-17 23:22:53.075557] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:29.510 [2024-11-17 23:22:53.075564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:29.510 [2024-11-17 23:22:53.075571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:29.510 [2024-11-17 23:22:53.075578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.510 [2024-11-17 23:22:53.075586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:29.510 [2024-11-17 23:22:53.075593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:17:29.510 [2024-11-17 23:22:53.075599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.510 [2024-11-17 23:22:53.083995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.510 [2024-11-17 23:22:53.084027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:29.510 [2024-11-17 23:22:53.084042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.344 ms 00:17:29.510 [2024-11-17 23:22:53.084049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.510 [2024-11-17 23:22:53.084161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.510 [2024-11-17 23:22:53.084170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:29.510 [2024-11-17 23:22:53.084180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:29.510 [2024-11-17 23:22:53.084187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.510 [2024-11-17 23:22:53.101175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.510 [2024-11-17 23:22:53.101217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:29.510 [2024-11-17 23:22:53.101231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.963 ms 00:17:29.510 [2024-11-17 23:22:53.101241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.510 [2024-11-17 23:22:53.101333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.510 [2024-11-17 23:22:53.101353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:29.510 [2024-11-17 23:22:53.101364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:29.510 [2024-11-17 23:22:53.101372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.510 [2024-11-17 23:22:53.101696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.510 [2024-11-17 23:22:53.101719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:29.510 [2024-11-17 23:22:53.101730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:17:29.510 [2024-11-17 23:22:53.101739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.510 [2024-11-17 23:22:53.101907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.510 [2024-11-17 23:22:53.101924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:29.510 [2024-11-17 23:22:53.101941] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:17:29.510 [2024-11-17 23:22:53.101951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.510 [2024-11-17 23:22:53.107518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.510 [2024-11-17 23:22:53.107550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:29.510 [2024-11-17 23:22:53.107561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.542 ms 00:17:29.510 [2024-11-17 23:22:53.107573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.510 [2024-11-17 23:22:53.110225] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:29.510 [2024-11-17 23:22:53.110256] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:29.510 [2024-11-17 23:22:53.110267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.510 [2024-11-17 23:22:53.110275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:29.510 [2024-11-17 23:22:53.110282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.573 ms 00:17:29.510 [2024-11-17 23:22:53.110290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.510 [2024-11-17 23:22:53.124742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.510 [2024-11-17 23:22:53.124773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:29.510 [2024-11-17 23:22:53.124783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.410 ms 00:17:29.510 [2024-11-17 23:22:53.124790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.510 [2024-11-17 23:22:53.126728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.510 [2024-11-17 23:22:53.126758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:29.510 [2024-11-17 23:22:53.126766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.874 ms 00:17:29.510 [2024-11-17 23:22:53.126773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.510 [2024-11-17 23:22:53.128550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.510 [2024-11-17 23:22:53.128585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:29.510 [2024-11-17 23:22:53.128594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.743 ms 00:17:29.510 [2024-11-17 23:22:53.128600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.510 [2024-11-17 23:22:53.128922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.510 [2024-11-17 23:22:53.128935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:29.510 [2024-11-17 23:22:53.128948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:17:29.510 [2024-11-17 23:22:53.128955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.510 [2024-11-17 23:22:53.144053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.510 [2024-11-17 23:22:53.144094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:29.510 [2024-11-17 23:22:53.144105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.064 ms 00:17:29.510 [2024-11-17 23:22:53.144113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.510 [2024-11-17 23:22:53.151388] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:29.510 [2024-11-17 23:22:53.165075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.510 [2024-11-17 23:22:53.165106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:29.510 [2024-11-17 23:22:53.165117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.906 ms 00:17:29.510 [2024-11-17 23:22:53.165132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.510 [2024-11-17 23:22:53.165221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.510 [2024-11-17 23:22:53.165231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:29.510 [2024-11-17 23:22:53.165244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:29.510 [2024-11-17 23:22:53.165254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.510 [2024-11-17 23:22:53.165298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.510 [2024-11-17 23:22:53.165307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:29.510 [2024-11-17 23:22:53.165318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:29.510 [2024-11-17 23:22:53.165330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.510 [2024-11-17 23:22:53.165350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.510 [2024-11-17 23:22:53.165358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:29.510 [2024-11-17 23:22:53.165365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:29.510 [2024-11-17 23:22:53.165372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.510 [2024-11-17 23:22:53.165403] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:29.510 [2024-11-17 23:22:53.165413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.510 [2024-11-17 23:22:53.165420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:29.510 [2024-11-17 23:22:53.165427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:29.510 [2024-11-17 23:22:53.165438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.510 [2024-11-17 23:22:53.168980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.511 [2024-11-17 23:22:53.169012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:29.511 [2024-11-17 23:22:53.169022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.524 ms 00:17:29.511 [2024-11-17 23:22:53.169029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.511 [2024-11-17 23:22:53.169105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.511 [2024-11-17 23:22:53.169115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:29.511 [2024-11-17 23:22:53.169123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:29.511 [2024-11-17 23:22:53.169130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.511 
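The startup layout dump earlier in this sequence is internally consistent: 23592960 L2P entries at an address size of 4 bytes come to exactly the 90.00 MiB that ftl_layout.c assigns to the l2p region. A quick cross-check sketch using only values printed by the log:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Values taken from the ftl_layout_setup lines above. */
    const uint64_t l2p_entries = 23592960; /* "L2P entries" */
    const uint64_t addr_size   = 4;        /* "L2P address size" (bytes) */

    uint64_t bytes = l2p_entries * addr_size;
    /* 94371840 bytes = 90.00 MiB, matching "Region l2p ... blocks: 90.00 MiB". */
    printf("L2P table: %llu bytes = %.2f MiB\n",
           (unsigned long long)bytes, bytes / (double)(1 << 20));
    return 0;
}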
[2024-11-17 23:22:53.169915] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:29.511 [2024-11-17 23:22:53.170854] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 107.578 ms, result 0 00:17:29.511 [2024-11-17 23:22:53.171863] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:29.511 [2024-11-17 23:22:53.181730] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:30.445  [2024-11-17T23:22:55.653Z] Copying: 22/256 [MB] (22 MBps) [2024-11-17T23:22:56.273Z] Copying: 44/256 [MB] (21 MBps) [2024-11-17T23:22:57.664Z] Copying: 69/256 [MB] (25 MBps) [2024-11-17T23:22:58.607Z] Copying: 80/256 [MB] (10 MBps) [2024-11-17T23:22:59.547Z] Copying: 95/256 [MB] (15 MBps) [2024-11-17T23:23:00.492Z] Copying: 113/256 [MB] (17 MBps) [2024-11-17T23:23:01.433Z] Copying: 133/256 [MB] (20 MBps) [2024-11-17T23:23:02.374Z] Copying: 155/256 [MB] (21 MBps) [2024-11-17T23:23:03.341Z] Copying: 175/256 [MB] (20 MBps) [2024-11-17T23:23:04.286Z] Copying: 196/256 [MB] (20 MBps) [2024-11-17T23:23:05.676Z] Copying: 214/256 [MB] (18 MBps) [2024-11-17T23:23:06.246Z] Copying: 236/256 [MB] (21 MBps) [2024-11-17T23:23:06.508Z] Copying: 253/256 [MB] (17 MBps) [2024-11-17T23:23:06.771Z] Copying: 256/256 [MB] (average 19 MBps)[2024-11-17 23:23:06.585540] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:42.950 [2024-11-17 23:23:06.588285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.950 [2024-11-17 23:23:06.588370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:42.950 [2024-11-17 23:23:06.588397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:42.950 [2024-11-17 23:23:06.588415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.950 [2024-11-17 23:23:06.588460] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:42.950 [2024-11-17 23:23:06.589305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.950 [2024-11-17 23:23:06.589352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:42.950 [2024-11-17 23:23:06.589374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:17:42.950 [2024-11-17 23:23:06.589392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.950 [2024-11-17 23:23:06.589978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.950 [2024-11-17 23:23:06.590020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:42.950 [2024-11-17 23:23:06.590038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:17:42.950 [2024-11-17 23:23:06.590059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.950 [2024-11-17 23:23:06.597446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.950 [2024-11-17 23:23:06.597493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:42.950 [2024-11-17 23:23:06.597504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.353 ms 00:17:42.950 [2024-11-17 23:23:06.597513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.950 
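The progress ticker above shows per-interval rates between 10 and 25 MBps and a final average of 19 MBps; dividing the 256 MB total by the roughly 13.4 seconds between the first and last progress timestamps (about 23:22:53.2 to 23:23:06.6) reproduces that average. A sketch, where the elapsed time is an approximation read off the log timestamps:

#include <stdio.h>

int main(void)
{
    const double total_mb  = 256.0; /* total copied, from the ticker */
    const double elapsed_s = 13.4;  /* approximate copy window from the timestamps */

    /* ~19.1 MBps, matching "average 19 MBps" in the ticker above. */
    printf("average: %.1f MBps\n", total_mb / elapsed_s);
    return 0;
}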
[2024-11-17 23:23:06.604550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.950 [2024-11-17 23:23:06.604593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:42.950 [2024-11-17 23:23:06.604604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.987 ms 00:17:42.950 [2024-11-17 23:23:06.604620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.950 [2024-11-17 23:23:06.607383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.950 [2024-11-17 23:23:06.607434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:42.950 [2024-11-17 23:23:06.607444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.711 ms 00:17:42.950 [2024-11-17 23:23:06.607452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.950 [2024-11-17 23:23:06.611462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.950 [2024-11-17 23:23:06.611525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:42.950 [2024-11-17 23:23:06.611536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.962 ms 00:17:42.950 [2024-11-17 23:23:06.611544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.950 [2024-11-17 23:23:06.611693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.950 [2024-11-17 23:23:06.611704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:42.950 [2024-11-17 23:23:06.611713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:17:42.950 [2024-11-17 23:23:06.611728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.950 [2024-11-17 23:23:06.614284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.950 [2024-11-17 23:23:06.614472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:42.950 [2024-11-17 23:23:06.614490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.532 ms 00:17:42.950 [2024-11-17 23:23:06.614498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.950 [2024-11-17 23:23:06.616548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.950 [2024-11-17 23:23:06.616602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:42.950 [2024-11-17 23:23:06.616612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.006 ms 00:17:42.950 [2024-11-17 23:23:06.616620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.950 [2024-11-17 23:23:06.618310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.950 [2024-11-17 23:23:06.618355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:42.950 [2024-11-17 23:23:06.618366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.646 ms 00:17:42.950 [2024-11-17 23:23:06.618373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.950 [2024-11-17 23:23:06.620032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.950 [2024-11-17 23:23:06.620076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:42.950 [2024-11-17 23:23:06.620085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.579 ms 00:17:42.950 [2024-11-17 23:23:06.620093] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.950 [2024-11-17 23:23:06.620138] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:17:42.950 [2024-11-17 23:23:06.620154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... Band 2 through Band 100 identical: 0 / 261120 wr_cnt: 0 state: free ...]
00:17:42.952 [2024-11-17 23:23:06.620996] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:42.952 [2024-11-17 23:23:06.621005] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af926b10-56ee-4fe0-b2b0-4ca61c941c28
00:17:42.952 [2024-11-17 23:23:06.621018] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:42.952 [2024-11-17 23:23:06.621045] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:42.952 [2024-11-17 23:23:06.621054] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:42.952 [2024-11-17 23:23:06.621062] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:42.952 [2024-11-17 23:23:06.621070] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:42.952 [2024-11-17 23:23:06.621078] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:42.952 [2024-11-17 23:23:06.621087] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:42.952 [2024-11-17 23:23:06.621094] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:42.952 [2024-11-17 23:23:06.621101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:17:42.952 [2024-11-17 23:23:06.621109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.952 [2024-11-17 23:23:06.621120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:42.952 [2024-11-17 23:23:06.621129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms
00:17:42.952 [2024-11-17 23:23:06.621137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.952 [2024-11-17 23:23:06.623434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.952 [2024-11-17 23:23:06.623467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:42.952 [2024-11-17 23:23:06.623478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.277 ms
00:17:42.952 [2024-11-17 23:23:06.623487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.952 [2024-11-17 23:23:06.623615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.952 [2024-11-17 23:23:06.623624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:42.952 [2024-11-17 23:23:06.623639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms
00:17:42.952 [2024-11-17 23:23:06.623647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.952 [2024-11-17 23:23:06.631610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:42.952 [2024-11-17 23:23:06.631786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:42.952 [2024-11-17 23:23:06.631846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:42.952 [2024-11-17 23:23:06.631873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.952 [2024-11-17 23:23:06.631996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:42.952 [2024-11-17 23:23:06.632020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*:
[FTL][ftl0] name: Initialize bands metadata 00:17:42.952 [2024-11-17 23:23:06.632227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.952 [2024-11-17 23:23:06.632269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.952 [2024-11-17 23:23:06.632347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.952 [2024-11-17 23:23:06.632386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:42.952 [2024-11-17 23:23:06.632407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.952 [2024-11-17 23:23:06.632469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.952 [2024-11-17 23:23:06.632508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.952 [2024-11-17 23:23:06.632539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:42.952 [2024-11-17 23:23:06.632563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.952 [2024-11-17 23:23:06.632582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.952 [2024-11-17 23:23:06.646204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.952 [2024-11-17 23:23:06.646416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:42.952 [2024-11-17 23:23:06.646479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.952 [2024-11-17 23:23:06.646505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.952 [2024-11-17 23:23:06.656593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.952 [2024-11-17 23:23:06.656771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:42.952 [2024-11-17 23:23:06.656826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.952 [2024-11-17 23:23:06.656851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.952 [2024-11-17 23:23:06.656929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.952 [2024-11-17 23:23:06.656954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:42.952 [2024-11-17 23:23:06.656975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.952 [2024-11-17 23:23:06.656995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.952 [2024-11-17 23:23:06.657038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.952 [2024-11-17 23:23:06.657128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:42.952 [2024-11-17 23:23:06.657145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.952 [2024-11-17 23:23:06.657153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.952 [2024-11-17 23:23:06.657239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.952 [2024-11-17 23:23:06.657250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:42.952 [2024-11-17 23:23:06.657258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.952 [2024-11-17 23:23:06.657266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.952 [2024-11-17 23:23:06.657306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:17:42.952 [2024-11-17 23:23:06.657322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:42.952 [2024-11-17 23:23:06.657333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.952 [2024-11-17 23:23:06.657340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.952 [2024-11-17 23:23:06.657385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.952 [2024-11-17 23:23:06.657395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:42.952 [2024-11-17 23:23:06.657403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.952 [2024-11-17 23:23:06.657411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.952 [2024-11-17 23:23:06.657464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.952 [2024-11-17 23:23:06.657478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:42.952 [2024-11-17 23:23:06.657493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.952 [2024-11-17 23:23:06.657501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.952 [2024-11-17 23:23:06.657657] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.355 ms, result 0 00:17:43.213 00:17:43.213 00:17:43.213 23:23:06 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:43.785 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:43.785 23:23:07 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:43.785 23:23:07 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:43.785 23:23:07 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:43.785 23:23:07 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:43.785 23:23:07 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:43.785 23:23:07 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:43.785 23:23:07 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 85299 00:17:43.785 23:23:07 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 85299 ']' 00:17:43.785 23:23:07 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 85299 00:17:43.785 Process with pid 85299 is not found 00:17:43.785 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85299) - No such process 00:17:43.785 23:23:07 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 85299 is not found' 00:17:43.785 ************************************ 00:17:43.785 END TEST ftl_trim 00:17:43.785 ************************************ 00:17:43.785 00:17:43.785 real 1m5.747s 00:17:43.785 user 1m26.053s 00:17:43.785 sys 0m5.257s 00:17:43.785 23:23:07 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.785 23:23:07 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:43.785 23:23:07 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:43.785 23:23:07 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:43.785 23:23:07 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.785 23:23:07 ftl -- common/autotest_common.sh@10 -- # set +x 
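Two details of the hand-off above are worth unpacking. First, the ftl_trim teardown: its killprocess target (spdk_tgt, pid 85299) had already exited, so the kill -0 liveness probe at autotest_common.sh line 958 fails with "No such process" and the helper falls through to the echo seen in the log. A minimal sketch of that pattern, assuming this simplified shape rather than the verbatim autotest_common.sh source:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1             # no pid recorded, nothing to do
    if kill -0 "$pid" 2> /dev/null; then  # kill -0 sends no signal; it only probes whether the pid exists
        kill "$pid"
    else
        echo "Process with pid $pid is not found"
    fi
}

Second, the run_test invocation just above passes restore.sh the arguments '-c 0000:00:10.0 0000:00:11.0': the -c option names the NV-cache PCIe address and the remaining positional argument is the base device. A hedged reconstruction of the option handling traced below (the getopts/shift/device/timeout steps around restore.sh lines 15-36); the meanings given for -u and -f are assumptions inferred from the ':u:c:f' optstring, not shown in this log:

restore_kill() { :; }           # stub here; the real helper removes scratch files and kills spdk_tgt
while getopts ':u:c:f' opt; do
    case $opt in
        c) nv_cache=$OPTARG ;;  # 0000:00:10.0 in this run
        u) uuid=$OPTARG ;;      # assumed: reattach an existing FTL instance by UUID
        f) fast=1 ;;            # assumed: boolean flag, takes no argument
    esac
done
shift 2                         # matches the 'shift 2' in the trace: drops '-c <bdf>'
device=$1                       # 0000:00:11.0 in this run
timeout=240                     # later passed to rpc.py as '-t 240'
trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT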
00:17:43.785 ************************************ 00:17:43.785 START TEST ftl_restore 00:17:43.785 ************************************ 00:17:43.785 23:23:07 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:44.047 * Looking for test storage... 00:17:44.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:44.047 23:23:07 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:44.047 23:23:07 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:17:44.047 23:23:07 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:44.047 23:23:07 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:44.047 23:23:07 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:44.047 23:23:07 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:44.047 23:23:07 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:44.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.047 --rc genhtml_branch_coverage=1 00:17:44.047 --rc genhtml_function_coverage=1 00:17:44.047 --rc genhtml_legend=1 00:17:44.047 --rc geninfo_all_blocks=1 00:17:44.047 --rc geninfo_unexecuted_blocks=1 00:17:44.047 00:17:44.047 ' 00:17:44.047 23:23:07 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:44.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.047 --rc genhtml_branch_coverage=1 00:17:44.047 --rc genhtml_function_coverage=1 00:17:44.047 --rc genhtml_legend=1 00:17:44.047 --rc geninfo_all_blocks=1 00:17:44.047 --rc geninfo_unexecuted_blocks=1 00:17:44.047 00:17:44.047 ' 00:17:44.047 23:23:07 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:44.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.047 --rc genhtml_branch_coverage=1 00:17:44.047 --rc genhtml_function_coverage=1 00:17:44.047 --rc genhtml_legend=1 00:17:44.047 --rc geninfo_all_blocks=1 00:17:44.047 --rc geninfo_unexecuted_blocks=1 00:17:44.047 00:17:44.047 ' 00:17:44.047 23:23:07 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:44.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.047 --rc genhtml_branch_coverage=1 00:17:44.047 --rc genhtml_function_coverage=1 00:17:44.047 --rc genhtml_legend=1 00:17:44.047 --rc geninfo_all_blocks=1 00:17:44.047 --rc geninfo_unexecuted_blocks=1 00:17:44.047 00:17:44.047 ' 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.uL2bzwWI30 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:44.047 
23:23:07 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=85558 00:17:44.047 23:23:07 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 85558 00:17:44.047 23:23:07 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 85558 ']' 00:17:44.048 23:23:07 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.048 23:23:07 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.048 23:23:07 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.048 23:23:07 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.048 23:23:07 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:44.048 23:23:07 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:44.048 [2024-11-17 23:23:07.841160] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:17:44.048 [2024-11-17 23:23:07.841292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85558 ] 00:17:44.313 [2024-11-17 23:23:07.989949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.313 [2024-11-17 23:23:08.018977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.886 23:23:08 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.886 23:23:08 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:17:44.886 23:23:08 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:44.886 23:23:08 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:44.886 23:23:08 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:44.886 23:23:08 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:44.886 23:23:08 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:44.886 23:23:08 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:45.459 23:23:08 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:45.459 23:23:08 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:45.459 23:23:08 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:45.459 23:23:08 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:45.459 23:23:08 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:45.459 23:23:08 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:45.459 23:23:08 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:45.459 23:23:08 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:45.459 23:23:09 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:45.459 { 00:17:45.459 "name": "nvme0n1", 00:17:45.459 "aliases": [ 00:17:45.459 "40e52431-3339-4666-a0d9-149e5a674fec" 00:17:45.459 ], 00:17:45.459 "product_name": "NVMe disk", 00:17:45.459 "block_size": 4096, 00:17:45.459 "num_blocks": 1310720, 00:17:45.459 "uuid": 
"40e52431-3339-4666-a0d9-149e5a674fec", 00:17:45.459 "numa_id": -1, 00:17:45.459 "assigned_rate_limits": { 00:17:45.459 "rw_ios_per_sec": 0, 00:17:45.459 "rw_mbytes_per_sec": 0, 00:17:45.459 "r_mbytes_per_sec": 0, 00:17:45.459 "w_mbytes_per_sec": 0 00:17:45.459 }, 00:17:45.459 "claimed": true, 00:17:45.459 "claim_type": "read_many_write_one", 00:17:45.459 "zoned": false, 00:17:45.459 "supported_io_types": { 00:17:45.459 "read": true, 00:17:45.459 "write": true, 00:17:45.459 "unmap": true, 00:17:45.459 "flush": true, 00:17:45.459 "reset": true, 00:17:45.459 "nvme_admin": true, 00:17:45.459 "nvme_io": true, 00:17:45.459 "nvme_io_md": false, 00:17:45.459 "write_zeroes": true, 00:17:45.459 "zcopy": false, 00:17:45.459 "get_zone_info": false, 00:17:45.459 "zone_management": false, 00:17:45.459 "zone_append": false, 00:17:45.459 "compare": true, 00:17:45.459 "compare_and_write": false, 00:17:45.459 "abort": true, 00:17:45.459 "seek_hole": false, 00:17:45.459 "seek_data": false, 00:17:45.459 "copy": true, 00:17:45.459 "nvme_iov_md": false 00:17:45.459 }, 00:17:45.459 "driver_specific": { 00:17:45.459 "nvme": [ 00:17:45.459 { 00:17:45.459 "pci_address": "0000:00:11.0", 00:17:45.459 "trid": { 00:17:45.459 "trtype": "PCIe", 00:17:45.459 "traddr": "0000:00:11.0" 00:17:45.459 }, 00:17:45.459 "ctrlr_data": { 00:17:45.459 "cntlid": 0, 00:17:45.459 "vendor_id": "0x1b36", 00:17:45.459 "model_number": "QEMU NVMe Ctrl", 00:17:45.459 "serial_number": "12341", 00:17:45.459 "firmware_revision": "8.0.0", 00:17:45.459 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:45.459 "oacs": { 00:17:45.459 "security": 0, 00:17:45.459 "format": 1, 00:17:45.459 "firmware": 0, 00:17:45.459 "ns_manage": 1 00:17:45.459 }, 00:17:45.459 "multi_ctrlr": false, 00:17:45.459 "ana_reporting": false 00:17:45.459 }, 00:17:45.459 "vs": { 00:17:45.459 "nvme_version": "1.4" 00:17:45.459 }, 00:17:45.459 "ns_data": { 00:17:45.459 "id": 1, 00:17:45.459 "can_share": false 00:17:45.459 } 00:17:45.459 } 00:17:45.459 ], 00:17:45.459 "mp_policy": "active_passive" 00:17:45.459 } 00:17:45.459 } 00:17:45.459 ]' 00:17:45.459 23:23:09 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:45.459 23:23:09 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:45.460 23:23:09 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:45.460 23:23:09 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:45.460 23:23:09 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:45.460 23:23:09 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:17:45.460 23:23:09 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:45.460 23:23:09 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:45.460 23:23:09 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:45.460 23:23:09 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:45.460 23:23:09 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:45.721 23:23:09 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=7b04df80-bda7-404d-9b51-ea9e73ca0423 00:17:45.721 23:23:09 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:45.721 23:23:09 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7b04df80-bda7-404d-9b51-ea9e73ca0423 00:17:45.984 23:23:09 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:17:46.247 23:23:09 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=0c188621-f352-4b44-8886-176e8a506b51 00:17:46.247 23:23:09 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0c188621-f352-4b44-8886-176e8a506b51 00:17:46.509 23:23:10 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=0c489bdc-94f6-47c9-8398-c5571cf7ee18 00:17:46.509 23:23:10 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:46.509 23:23:10 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0c489bdc-94f6-47c9-8398-c5571cf7ee18 00:17:46.509 23:23:10 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:46.509 23:23:10 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:46.509 23:23:10 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=0c489bdc-94f6-47c9-8398-c5571cf7ee18 00:17:46.509 23:23:10 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:46.509 23:23:10 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 0c489bdc-94f6-47c9-8398-c5571cf7ee18 00:17:46.509 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0c489bdc-94f6-47c9-8398-c5571cf7ee18 00:17:46.509 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:46.509 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:46.509 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:46.509 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0c489bdc-94f6-47c9-8398-c5571cf7ee18 00:17:46.770 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:46.770 { 00:17:46.770 "name": "0c489bdc-94f6-47c9-8398-c5571cf7ee18", 00:17:46.770 "aliases": [ 00:17:46.770 "lvs/nvme0n1p0" 00:17:46.770 ], 00:17:46.770 "product_name": "Logical Volume", 00:17:46.770 "block_size": 4096, 00:17:46.770 "num_blocks": 26476544, 00:17:46.770 "uuid": "0c489bdc-94f6-47c9-8398-c5571cf7ee18", 00:17:46.770 "assigned_rate_limits": { 00:17:46.770 "rw_ios_per_sec": 0, 00:17:46.770 "rw_mbytes_per_sec": 0, 00:17:46.770 "r_mbytes_per_sec": 0, 00:17:46.770 "w_mbytes_per_sec": 0 00:17:46.770 }, 00:17:46.770 "claimed": false, 00:17:46.770 "zoned": false, 00:17:46.770 "supported_io_types": { 00:17:46.770 "read": true, 00:17:46.770 "write": true, 00:17:46.770 "unmap": true, 00:17:46.770 "flush": false, 00:17:46.770 "reset": true, 00:17:46.770 "nvme_admin": false, 00:17:46.770 "nvme_io": false, 00:17:46.770 "nvme_io_md": false, 00:17:46.770 "write_zeroes": true, 00:17:46.770 "zcopy": false, 00:17:46.770 "get_zone_info": false, 00:17:46.770 "zone_management": false, 00:17:46.770 "zone_append": false, 00:17:46.770 "compare": false, 00:17:46.770 "compare_and_write": false, 00:17:46.770 "abort": false, 00:17:46.770 "seek_hole": true, 00:17:46.770 "seek_data": true, 00:17:46.770 "copy": false, 00:17:46.770 "nvme_iov_md": false 00:17:46.770 }, 00:17:46.770 "driver_specific": { 00:17:46.770 "lvol": { 00:17:46.770 "lvol_store_uuid": "0c188621-f352-4b44-8886-176e8a506b51", 00:17:46.770 "base_bdev": "nvme0n1", 00:17:46.770 "thin_provision": true, 00:17:46.770 "num_allocated_clusters": 0, 00:17:46.770 "snapshot": false, 00:17:46.770 "clone": false, 00:17:46.770 "esnap_clone": false 00:17:46.770 } 00:17:46.770 } 00:17:46.770 } 00:17:46.770 ]' 00:17:46.770 23:23:10 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:46.770 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:46.771 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:46.771 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:46.771 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:46.771 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:17:46.771 23:23:10 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:46.771 23:23:10 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:46.771 23:23:10 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:47.032 23:23:10 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:47.032 23:23:10 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:47.032 23:23:10 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 0c489bdc-94f6-47c9-8398-c5571cf7ee18 00:17:47.032 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0c489bdc-94f6-47c9-8398-c5571cf7ee18 00:17:47.032 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:47.032 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:47.032 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:47.032 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0c489bdc-94f6-47c9-8398-c5571cf7ee18 00:17:47.292 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:47.292 { 00:17:47.292 "name": "0c489bdc-94f6-47c9-8398-c5571cf7ee18", 00:17:47.292 "aliases": [ 00:17:47.292 "lvs/nvme0n1p0" 00:17:47.292 ], 00:17:47.292 "product_name": "Logical Volume", 00:17:47.292 "block_size": 4096, 00:17:47.292 "num_blocks": 26476544, 00:17:47.292 "uuid": "0c489bdc-94f6-47c9-8398-c5571cf7ee18", 00:17:47.292 "assigned_rate_limits": { 00:17:47.292 "rw_ios_per_sec": 0, 00:17:47.292 "rw_mbytes_per_sec": 0, 00:17:47.292 "r_mbytes_per_sec": 0, 00:17:47.292 "w_mbytes_per_sec": 0 00:17:47.292 }, 00:17:47.292 "claimed": false, 00:17:47.292 "zoned": false, 00:17:47.292 "supported_io_types": { 00:17:47.292 "read": true, 00:17:47.292 "write": true, 00:17:47.292 "unmap": true, 00:17:47.292 "flush": false, 00:17:47.292 "reset": true, 00:17:47.292 "nvme_admin": false, 00:17:47.292 "nvme_io": false, 00:17:47.292 "nvme_io_md": false, 00:17:47.292 "write_zeroes": true, 00:17:47.292 "zcopy": false, 00:17:47.292 "get_zone_info": false, 00:17:47.292 "zone_management": false, 00:17:47.292 "zone_append": false, 00:17:47.292 "compare": false, 00:17:47.292 "compare_and_write": false, 00:17:47.292 "abort": false, 00:17:47.292 "seek_hole": true, 00:17:47.292 "seek_data": true, 00:17:47.292 "copy": false, 00:17:47.292 "nvme_iov_md": false 00:17:47.292 }, 00:17:47.292 "driver_specific": { 00:17:47.292 "lvol": { 00:17:47.292 "lvol_store_uuid": "0c188621-f352-4b44-8886-176e8a506b51", 00:17:47.292 "base_bdev": "nvme0n1", 00:17:47.292 "thin_provision": true, 00:17:47.292 "num_allocated_clusters": 0, 00:17:47.292 "snapshot": false, 00:17:47.292 "clone": false, 00:17:47.292 "esnap_clone": false 00:17:47.292 } 00:17:47.292 } 00:17:47.292 } 00:17:47.293 ]' 00:17:47.293 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:17:47.293 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:47.293 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:47.293 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:47.293 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:47.293 23:23:10 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:17:47.293 23:23:10 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:47.293 23:23:10 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:47.554 23:23:11 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:47.554 23:23:11 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 0c489bdc-94f6-47c9-8398-c5571cf7ee18 00:17:47.554 23:23:11 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0c489bdc-94f6-47c9-8398-c5571cf7ee18 00:17:47.554 23:23:11 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:47.554 23:23:11 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:47.554 23:23:11 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:47.554 23:23:11 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0c489bdc-94f6-47c9-8398-c5571cf7ee18 00:17:47.815 23:23:11 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:47.815 { 00:17:47.815 "name": "0c489bdc-94f6-47c9-8398-c5571cf7ee18", 00:17:47.815 "aliases": [ 00:17:47.815 "lvs/nvme0n1p0" 00:17:47.815 ], 00:17:47.815 "product_name": "Logical Volume", 00:17:47.815 "block_size": 4096, 00:17:47.815 "num_blocks": 26476544, 00:17:47.815 "uuid": "0c489bdc-94f6-47c9-8398-c5571cf7ee18", 00:17:47.815 "assigned_rate_limits": { 00:17:47.815 "rw_ios_per_sec": 0, 00:17:47.815 "rw_mbytes_per_sec": 0, 00:17:47.815 "r_mbytes_per_sec": 0, 00:17:47.815 "w_mbytes_per_sec": 0 00:17:47.815 }, 00:17:47.815 "claimed": false, 00:17:47.815 "zoned": false, 00:17:47.815 "supported_io_types": { 00:17:47.815 "read": true, 00:17:47.815 "write": true, 00:17:47.815 "unmap": true, 00:17:47.815 "flush": false, 00:17:47.815 "reset": true, 00:17:47.815 "nvme_admin": false, 00:17:47.815 "nvme_io": false, 00:17:47.815 "nvme_io_md": false, 00:17:47.815 "write_zeroes": true, 00:17:47.815 "zcopy": false, 00:17:47.815 "get_zone_info": false, 00:17:47.815 "zone_management": false, 00:17:47.815 "zone_append": false, 00:17:47.815 "compare": false, 00:17:47.815 "compare_and_write": false, 00:17:47.815 "abort": false, 00:17:47.815 "seek_hole": true, 00:17:47.815 "seek_data": true, 00:17:47.815 "copy": false, 00:17:47.815 "nvme_iov_md": false 00:17:47.815 }, 00:17:47.815 "driver_specific": { 00:17:47.815 "lvol": { 00:17:47.815 "lvol_store_uuid": "0c188621-f352-4b44-8886-176e8a506b51", 00:17:47.815 "base_bdev": "nvme0n1", 00:17:47.815 "thin_provision": true, 00:17:47.815 "num_allocated_clusters": 0, 00:17:47.815 "snapshot": false, 00:17:47.815 "clone": false, 00:17:47.815 "esnap_clone": false 00:17:47.815 } 00:17:47.815 } 00:17:47.815 } 00:17:47.815 ]' 00:17:47.815 23:23:11 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:47.815 23:23:11 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:47.815 23:23:11 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:47.815 23:23:11 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:17:47.815 23:23:11 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:47.815 23:23:11 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:17:47.815 23:23:11 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:47.815 23:23:11 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0c489bdc-94f6-47c9-8398-c5571cf7ee18 --l2p_dram_limit 10' 00:17:47.815 23:23:11 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:47.815 23:23:11 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:47.815 23:23:11 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:47.815 23:23:11 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:47.815 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:47.815 23:23:11 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0c489bdc-94f6-47c9-8398-c5571cf7ee18 --l2p_dram_limit 10 -c nvc0n1p0 00:17:48.078 [2024-11-17 23:23:11.657131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.078 [2024-11-17 23:23:11.657166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:48.078 [2024-11-17 23:23:11.657176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:48.078 [2024-11-17 23:23:11.657185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.078 [2024-11-17 23:23:11.657228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.078 [2024-11-17 23:23:11.657237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:48.078 [2024-11-17 23:23:11.657246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:48.078 [2024-11-17 23:23:11.657254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.078 [2024-11-17 23:23:11.657269] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:48.078 [2024-11-17 23:23:11.657490] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:48.078 [2024-11-17 23:23:11.657503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.078 [2024-11-17 23:23:11.657512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:48.078 [2024-11-17 23:23:11.657519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:17:48.078 [2024-11-17 23:23:11.657526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.078 [2024-11-17 23:23:11.657551] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e1a16e31-7428-4c79-b749-82ad6e64cef8 00:17:48.078 [2024-11-17 23:23:11.658498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.078 [2024-11-17 23:23:11.658516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:48.078 [2024-11-17 23:23:11.658526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:48.078 [2024-11-17 23:23:11.658533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.078 [2024-11-17 23:23:11.663208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.078 [2024-11-17 
23:23:11.663230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:48.078 [2024-11-17 23:23:11.663240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.633 ms 00:17:48.078 [2024-11-17 23:23:11.663247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.078 [2024-11-17 23:23:11.663436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.078 [2024-11-17 23:23:11.663456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:48.078 [2024-11-17 23:23:11.663464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:48.078 [2024-11-17 23:23:11.663470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.078 [2024-11-17 23:23:11.663508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.078 [2024-11-17 23:23:11.663516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:48.078 [2024-11-17 23:23:11.663523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:48.078 [2024-11-17 23:23:11.663535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.078 [2024-11-17 23:23:11.663556] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:48.078 [2024-11-17 23:23:11.664820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.078 [2024-11-17 23:23:11.664843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:48.078 [2024-11-17 23:23:11.664850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.269 ms 00:17:48.078 [2024-11-17 23:23:11.664857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.078 [2024-11-17 23:23:11.664894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.078 [2024-11-17 23:23:11.664902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:48.078 [2024-11-17 23:23:11.664908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:48.078 [2024-11-17 23:23:11.664917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.078 [2024-11-17 23:23:11.664930] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:48.078 [2024-11-17 23:23:11.665042] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:48.078 [2024-11-17 23:23:11.665051] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:48.078 [2024-11-17 23:23:11.665061] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:48.078 [2024-11-17 23:23:11.665069] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:48.078 [2024-11-17 23:23:11.665085] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:48.078 [2024-11-17 23:23:11.665093] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:48.078 [2024-11-17 23:23:11.665100] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:48.078 [2024-11-17 23:23:11.665108] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:48.078 [2024-11-17 23:23:11.665114] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:48.078 [2024-11-17 23:23:11.665120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.078 [2024-11-17 23:23:11.665127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:48.078 [2024-11-17 23:23:11.665133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:17:48.078 [2024-11-17 23:23:11.665140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.078 [2024-11-17 23:23:11.665203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.078 [2024-11-17 23:23:11.665212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:48.078 [2024-11-17 23:23:11.665218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:48.078 [2024-11-17 23:23:11.665225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.078 [2024-11-17 23:23:11.665298] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:48.078 [2024-11-17 23:23:11.665308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:48.078 [2024-11-17 23:23:11.665314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:48.078 [2024-11-17 23:23:11.665321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.078 [2024-11-17 23:23:11.665327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:48.078 [2024-11-17 23:23:11.665333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:48.078 [2024-11-17 23:23:11.665338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:48.078 [2024-11-17 23:23:11.665345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:48.078 [2024-11-17 23:23:11.665351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:48.078 [2024-11-17 23:23:11.665357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:48.078 [2024-11-17 23:23:11.665362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:48.078 [2024-11-17 23:23:11.665369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:48.078 [2024-11-17 23:23:11.665374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:48.078 [2024-11-17 23:23:11.665382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:48.078 [2024-11-17 23:23:11.665387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:48.078 [2024-11-17 23:23:11.665394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.078 [2024-11-17 23:23:11.665399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:48.078 [2024-11-17 23:23:11.665405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:48.078 [2024-11-17 23:23:11.665410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.078 [2024-11-17 23:23:11.665416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:48.078 [2024-11-17 23:23:11.665422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:48.078 [2024-11-17 23:23:11.665429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.078 [2024-11-17 23:23:11.665434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:48.078 
[2024-11-17 23:23:11.665440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:48.078 [2024-11-17 23:23:11.665445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.078 [2024-11-17 23:23:11.665452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:48.078 [2024-11-17 23:23:11.665457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:48.078 [2024-11-17 23:23:11.665463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.078 [2024-11-17 23:23:11.665469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:48.078 [2024-11-17 23:23:11.665477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:48.078 [2024-11-17 23:23:11.665483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.078 [2024-11-17 23:23:11.665490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:48.078 [2024-11-17 23:23:11.665496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:48.078 [2024-11-17 23:23:11.665504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:48.079 [2024-11-17 23:23:11.665510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:48.079 [2024-11-17 23:23:11.665517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:48.079 [2024-11-17 23:23:11.665523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:48.079 [2024-11-17 23:23:11.665530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:48.079 [2024-11-17 23:23:11.665535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:48.079 [2024-11-17 23:23:11.665542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.079 [2024-11-17 23:23:11.665548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:48.079 [2024-11-17 23:23:11.665555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:48.079 [2024-11-17 23:23:11.665560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.079 [2024-11-17 23:23:11.665567] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:48.079 [2024-11-17 23:23:11.665573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:48.079 [2024-11-17 23:23:11.665582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:48.079 [2024-11-17 23:23:11.665591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.079 [2024-11-17 23:23:11.665600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:48.079 [2024-11-17 23:23:11.665606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:48.079 [2024-11-17 23:23:11.665612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:48.079 [2024-11-17 23:23:11.665618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:48.079 [2024-11-17 23:23:11.665625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:48.079 [2024-11-17 23:23:11.665632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:48.079 [2024-11-17 23:23:11.665641] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:48.079 [2024-11-17 
23:23:11.665651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:48.079 [2024-11-17 23:23:11.665661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:48.079 [2024-11-17 23:23:11.665667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:48.079 [2024-11-17 23:23:11.665674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:48.079 [2024-11-17 23:23:11.665681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:48.079 [2024-11-17 23:23:11.665688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:48.079 [2024-11-17 23:23:11.665695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:48.079 [2024-11-17 23:23:11.665704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:48.079 [2024-11-17 23:23:11.665710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:48.079 [2024-11-17 23:23:11.665717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:48.079 [2024-11-17 23:23:11.665724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:48.079 [2024-11-17 23:23:11.665731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:48.079 [2024-11-17 23:23:11.665737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:48.079 [2024-11-17 23:23:11.665745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:48.079 [2024-11-17 23:23:11.665751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:48.079 [2024-11-17 23:23:11.665758] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:48.079 [2024-11-17 23:23:11.665765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:48.079 [2024-11-17 23:23:11.665774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:48.079 [2024-11-17 23:23:11.665780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:48.079 [2024-11-17 23:23:11.665787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:48.079 [2024-11-17 23:23:11.665794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:48.079 [2024-11-17 23:23:11.665802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.079 [2024-11-17 23:23:11.665808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:48.079 [2024-11-17 23:23:11.665817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:17:48.079 [2024-11-17 23:23:11.665823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.079 [2024-11-17 23:23:11.665854] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:48.079 [2024-11-17 23:23:11.665860] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:52.299 [2024-11-17 23:23:15.265465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.299 [2024-11-17 23:23:15.265554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:52.299 [2024-11-17 23:23:15.265574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3599.584 ms 00:17:52.299 [2024-11-17 23:23:15.265584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.299 [2024-11-17 23:23:15.279912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.299 [2024-11-17 23:23:15.279960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:52.299 [2024-11-17 23:23:15.279976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.196 ms 00:17:52.299 [2024-11-17 23:23:15.279985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.299 [2024-11-17 23:23:15.280115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.299 [2024-11-17 23:23:15.280126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:52.299 [2024-11-17 23:23:15.280144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:52.299 [2024-11-17 23:23:15.280152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.299 [2024-11-17 23:23:15.292294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.299 [2024-11-17 23:23:15.292339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:52.299 [2024-11-17 23:23:15.292353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.097 ms 00:17:52.299 [2024-11-17 23:23:15.292364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.299 [2024-11-17 23:23:15.292399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.299 [2024-11-17 23:23:15.292407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:52.299 [2024-11-17 23:23:15.292418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:52.299 [2024-11-17 23:23:15.292426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.299 [2024-11-17 23:23:15.292937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.299 [2024-11-17 23:23:15.292960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:52.299 [2024-11-17 23:23:15.292973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:17:52.299 [2024-11-17 23:23:15.292982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.299 
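A quick consistency check on the records above: ftl_layout.c reports each region in MiB while the superblock dump gives the same regions in FTL blocks (blk_offs/blk_sz, hex), and the two agree at a 4 KiB block size, e.g. the l2p region's blk_sz of 0x5000 is exactly the 80.00 MiB shown. Likewise the NV cache scrub covered 5 chunks in 3599.584 ms, roughly 720 ms per chunk. A one-liner to verify the block arithmetic (plain bash, illustrative only, not part of the test):

  # 0x5000 FTL blocks x 4096 bytes each, expressed in MiB: expect 80
  echo $(( 0x5000 * 4096 / (1024 * 1024) ))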
[2024-11-17 23:23:15.293117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.299 [2024-11-17 23:23:15.293127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:52.299 [2024-11-17 23:23:15.293139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:17:52.299 [2024-11-17 23:23:15.293152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.299 [2024-11-17 23:23:15.301338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.299 [2024-11-17 23:23:15.301376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:52.299 [2024-11-17 23:23:15.301389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.156 ms 00:17:52.299 [2024-11-17 23:23:15.301397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.299 [2024-11-17 23:23:15.310990] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:52.299 [2024-11-17 23:23:15.314496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.299 [2024-11-17 23:23:15.314534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:52.299 [2024-11-17 23:23:15.314544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.025 ms 00:17:52.299 [2024-11-17 23:23:15.314554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.299 [2024-11-17 23:23:15.404715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.299 [2024-11-17 23:23:15.404790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:52.299 [2024-11-17 23:23:15.404809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.130 ms 00:17:52.299 [2024-11-17 23:23:15.404823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.299 [2024-11-17 23:23:15.405069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.299 [2024-11-17 23:23:15.405086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:52.299 [2024-11-17 23:23:15.405106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:17:52.299 [2024-11-17 23:23:15.405117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.299 [2024-11-17 23:23:15.411941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.299 [2024-11-17 23:23:15.411999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:52.299 [2024-11-17 23:23:15.412019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.767 ms 00:17:52.299 [2024-11-17 23:23:15.412031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.299 [2024-11-17 23:23:15.417954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.300 [2024-11-17 23:23:15.418007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:52.300 [2024-11-17 23:23:15.418018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.869 ms 00:17:52.300 [2024-11-17 23:23:15.418028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.300 [2024-11-17 23:23:15.418548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.300 [2024-11-17 23:23:15.418571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:52.300 
[2024-11-17 23:23:15.418582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 00:17:52.300 [2024-11-17 23:23:15.418595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.300 [2024-11-17 23:23:15.460473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.300 [2024-11-17 23:23:15.460532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:52.300 [2024-11-17 23:23:15.460548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.833 ms 00:17:52.300 [2024-11-17 23:23:15.460559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.300 [2024-11-17 23:23:15.468356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.300 [2024-11-17 23:23:15.468411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:52.300 [2024-11-17 23:23:15.468429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.728 ms 00:17:52.300 [2024-11-17 23:23:15.468440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.300 [2024-11-17 23:23:15.475242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.300 [2024-11-17 23:23:15.475298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:52.300 [2024-11-17 23:23:15.475308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.748 ms 00:17:52.300 [2024-11-17 23:23:15.475319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.300 [2024-11-17 23:23:15.481960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.300 [2024-11-17 23:23:15.482009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:52.300 [2024-11-17 23:23:15.482019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.589 ms 00:17:52.300 [2024-11-17 23:23:15.482032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.300 [2024-11-17 23:23:15.482085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.300 [2024-11-17 23:23:15.482098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:52.300 [2024-11-17 23:23:15.482107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:52.300 [2024-11-17 23:23:15.482117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.300 [2024-11-17 23:23:15.482208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.300 [2024-11-17 23:23:15.482256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:52.300 [2024-11-17 23:23:15.482265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:52.300 [2024-11-17 23:23:15.482278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.300 [2024-11-17 23:23:15.483379] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3825.739 ms, result 0 00:17:52.300 { 00:17:52.300 "name": "ftl0", 00:17:52.300 "uuid": "e1a16e31-7428-4c79-b749-82ad6e64cef8" 00:17:52.300 } 00:17:52.300 23:23:15 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:52.300 23:23:15 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:52.300 23:23:15 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:52.300 23:23:15 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:52.300 [2024-11-17 23:23:15.930797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.300 [2024-11-17 23:23:15.930858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:52.300 [2024-11-17 23:23:15.930890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:52.300 [2024-11-17 23:23:15.930900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.300 [2024-11-17 23:23:15.930928] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:52.300 [2024-11-17 23:23:15.931741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.300 [2024-11-17 23:23:15.931795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:52.300 [2024-11-17 23:23:15.931807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:17:52.300 [2024-11-17 23:23:15.931821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.300 [2024-11-17 23:23:15.932113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.300 [2024-11-17 23:23:15.932134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:52.300 [2024-11-17 23:23:15.932144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:17:52.300 [2024-11-17 23:23:15.932158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.300 [2024-11-17 23:23:15.935393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.300 [2024-11-17 23:23:15.935413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:52.300 [2024-11-17 23:23:15.935424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.218 ms 00:17:52.300 [2024-11-17 23:23:15.935435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.300 [2024-11-17 23:23:15.941668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.300 [2024-11-17 23:23:15.941712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:52.300 [2024-11-17 23:23:15.941724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.215 ms 00:17:52.300 [2024-11-17 23:23:15.941738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.300 [2024-11-17 23:23:15.945022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.300 [2024-11-17 23:23:15.945086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:52.300 [2024-11-17 23:23:15.945097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.167 ms 00:17:52.300 [2024-11-17 23:23:15.945107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.300 [2024-11-17 23:23:15.951469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.300 [2024-11-17 23:23:15.951528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:52.300 [2024-11-17 23:23:15.951540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.310 ms 00:17:52.300 [2024-11-17 23:23:15.951550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.300 [2024-11-17 23:23:15.951705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.300 [2024-11-17 23:23:15.951719] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:52.300 [2024-11-17 23:23:15.951731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:17:52.300 [2024-11-17 23:23:15.951742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.300 [2024-11-17 23:23:15.955358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.300 [2024-11-17 23:23:15.955412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:52.300 [2024-11-17 23:23:15.955423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.596 ms 00:17:52.300 [2024-11-17 23:23:15.955433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.300 [2024-11-17 23:23:15.958559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.300 [2024-11-17 23:23:15.958616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:52.300 [2024-11-17 23:23:15.958626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.074 ms 00:17:52.300 [2024-11-17 23:23:15.958636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.300 [2024-11-17 23:23:15.961200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.300 [2024-11-17 23:23:15.961254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:52.300 [2024-11-17 23:23:15.961273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.517 ms 00:17:52.300 [2024-11-17 23:23:15.961283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.300 [2024-11-17 23:23:15.963493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.300 [2024-11-17 23:23:15.963550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:52.300 [2024-11-17 23:23:15.963560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.133 ms 00:17:52.300 [2024-11-17 23:23:15.963570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.300 [2024-11-17 23:23:15.963618] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:52.300 [2024-11-17 23:23:15.963637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:52.300 [2024-11-17 23:23:15.963648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:52.300 [2024-11-17 23:23:15.963659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:52.300 [2024-11-17 23:23:15.963667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:52.300 [2024-11-17 23:23:15.963681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:52.300 [2024-11-17 23:23:15.963706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:52.300 [2024-11-17 23:23:15.963716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:52.300 [2024-11-17 23:23:15.963724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:52.300 [2024-11-17 23:23:15.963734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:52.300 [2024-11-17 23:23:15.963742] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:52.300 [2024-11-17 23:23:15.963753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:52.300 [2024-11-17 23:23:15.963762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 
[2024-11-17 23:23:15.963992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.963999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:17:52.301 [2024-11-17 23:23:15.964217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:52.301 [2024-11-17 23:23:15.964415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:52.302 [2024-11-17 23:23:15.964423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:52.302 [2024-11-17 23:23:15.964433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:52.302 [2024-11-17 23:23:15.964441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:17:52.302 [2024-11-17 23:23:15.964453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:52.302 [2024-11-17 23:23:15.964460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:52.302 [2024-11-17 23:23:15.964470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:52.302 [2024-11-17 23:23:15.964477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:52.302 [2024-11-17 23:23:15.964486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:52.302 [2024-11-17 23:23:15.964494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:52.302 [2024-11-17 23:23:15.964503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:52.302 [2024-11-17 23:23:15.964511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:52.302 [2024-11-17 23:23:15.964521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:52.302 [2024-11-17 23:23:15.964529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:52.302 [2024-11-17 23:23:15.964540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:52.302 [2024-11-17 23:23:15.964548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:52.302 [2024-11-17 23:23:15.964559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:52.302 [2024-11-17 23:23:15.964567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:52.302 [2024-11-17 23:23:15.964577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:52.302 [2024-11-17 23:23:15.964585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:52.302 [2024-11-17 23:23:15.964606] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:52.302 [2024-11-17 23:23:15.964620] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e1a16e31-7428-4c79-b749-82ad6e64cef8 00:17:52.302 [2024-11-17 23:23:15.964633] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:52.302 [2024-11-17 23:23:15.964642] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:52.302 [2024-11-17 23:23:15.964652] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:52.302 [2024-11-17 23:23:15.964660] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:52.302 [2024-11-17 23:23:15.964670] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:52.302 [2024-11-17 23:23:15.964685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:52.302 [2024-11-17 23:23:15.964695] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:52.302 [2024-11-17 23:23:15.964701] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:52.302 [2024-11-17 23:23:15.964710] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:17:52.302 [2024-11-17 23:23:15.964717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.302 [2024-11-17 23:23:15.964728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:52.302 [2024-11-17 23:23:15.964737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.101 ms 00:17:52.302 [2024-11-17 23:23:15.964746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.302 [2024-11-17 23:23:15.967147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.302 [2024-11-17 23:23:15.967182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:52.302 [2024-11-17 23:23:15.967192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.376 ms 00:17:52.302 [2024-11-17 23:23:15.967205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.302 [2024-11-17 23:23:15.967323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.302 [2024-11-17 23:23:15.967335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:52.302 [2024-11-17 23:23:15.967351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:52.302 [2024-11-17 23:23:15.967360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.302 [2024-11-17 23:23:15.975977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.302 [2024-11-17 23:23:15.976031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:52.302 [2024-11-17 23:23:15.976045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.302 [2024-11-17 23:23:15.976132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.302 [2024-11-17 23:23:15.976203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.302 [2024-11-17 23:23:15.976214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:52.302 [2024-11-17 23:23:15.976222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.302 [2024-11-17 23:23:15.976233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.302 [2024-11-17 23:23:15.976321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.302 [2024-11-17 23:23:15.976337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:52.302 [2024-11-17 23:23:15.976346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.302 [2024-11-17 23:23:15.976358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.302 [2024-11-17 23:23:15.976378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.302 [2024-11-17 23:23:15.976389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:52.302 [2024-11-17 23:23:15.976400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.302 [2024-11-17 23:23:15.976410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.302 [2024-11-17 23:23:15.990675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.302 [2024-11-17 23:23:15.990734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:52.302 [2024-11-17 23:23:15.990746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.302 
[2024-11-17 23:23:15.990759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.302 [2024-11-17 23:23:16.001369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.302 [2024-11-17 23:23:16.001421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:52.302 [2024-11-17 23:23:16.001437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.302 [2024-11-17 23:23:16.001447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.302 [2024-11-17 23:23:16.001523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.302 [2024-11-17 23:23:16.001539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:52.302 [2024-11-17 23:23:16.001548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.302 [2024-11-17 23:23:16.001557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.302 [2024-11-17 23:23:16.001609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.302 [2024-11-17 23:23:16.001620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:52.302 [2024-11-17 23:23:16.001629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.302 [2024-11-17 23:23:16.001639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.302 [2024-11-17 23:23:16.001717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.302 [2024-11-17 23:23:16.001729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:52.302 [2024-11-17 23:23:16.001738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.302 [2024-11-17 23:23:16.001748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.302 [2024-11-17 23:23:16.001788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.302 [2024-11-17 23:23:16.001803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:52.302 [2024-11-17 23:23:16.001812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.302 [2024-11-17 23:23:16.001821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.302 [2024-11-17 23:23:16.001862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.302 [2024-11-17 23:23:16.001875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:52.302 [2024-11-17 23:23:16.001919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.302 [2024-11-17 23:23:16.001929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.302 [2024-11-17 23:23:16.001979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.302 [2024-11-17 23:23:16.001992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:52.302 [2024-11-17 23:23:16.002002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.303 [2024-11-17 23:23:16.002013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.303 [2024-11-17 23:23:16.002160] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.326 ms, result 0 00:17:52.303 true 00:17:52.303 23:23:16 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 85558 00:17:52.303 
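The 'FTL shutdown' sequence above is what the bdev_ftl_unload RPC from restore.sh@65 drives: persist L2P, NV cache, band and trim metadata, set the clean state, dump band validity and stats, then roll the init steps back; killprocess (its internals traced below) then stops the target app. Reduced to its essentials, and with an assumed shell variable standing in for the app pid (85558 in this run), the teardown amounts to:

  # persist FTL metadata and mark the device clean before exit
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
  # killprocess: signal the SPDK app and reap it ($spdk_tgt_pid is an illustrative name)
  kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"

The restore phase that follows regenerates the 1 GiB test input (256K records of 4 KiB, i.e. 1073741824 bytes, copied in 4.1395 s for the 259 MB/s reported below) before replaying it through spdk_dd against ftl0.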
23:23:16 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 85558 ']' 00:17:52.303 23:23:16 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 85558 00:17:52.303 23:23:16 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:17:52.303 23:23:16 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.303 23:23:16 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85558 00:17:52.303 23:23:16 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.303 23:23:16 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.303 killing process with pid 85558 00:17:52.303 23:23:16 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85558' 00:17:52.303 23:23:16 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 85558 00:17:52.303 23:23:16 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 85558 00:17:57.596 23:23:21 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:01.799 262144+0 records in 00:18:01.799 262144+0 records out 00:18:01.799 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.1395 s, 259 MB/s 00:18:01.799 23:23:25 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:03.713 23:23:27 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:03.713 [2024-11-17 23:23:27.249630] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:18:03.714 [2024-11-17 23:23:27.249725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85787 ] 00:18:03.714 [2024-11-17 23:23:27.391544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.714 [2024-11-17 23:23:27.411368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.714 [2024-11-17 23:23:27.501197] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:03.714 [2024-11-17 23:23:27.501253] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:03.976 [2024-11-17 23:23:27.657992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.658041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:03.976 [2024-11-17 23:23:27.658057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:03.976 [2024-11-17 23:23:27.658065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.658113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.658122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:03.976 [2024-11-17 23:23:27.658131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:03.976 [2024-11-17 23:23:27.658138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.658158] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:18:03.976 [2024-11-17 23:23:27.658465] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:03.976 [2024-11-17 23:23:27.658498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.658506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:03.976 [2024-11-17 23:23:27.658514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:18:03.976 [2024-11-17 23:23:27.658524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.659682] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:03.976 [2024-11-17 23:23:27.662402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.662443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:03.976 [2024-11-17 23:23:27.662453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.723 ms 00:18:03.976 [2024-11-17 23:23:27.662462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.662527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.662539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:03.976 [2024-11-17 23:23:27.662549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:03.976 [2024-11-17 23:23:27.662557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.667850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.667890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:03.976 [2024-11-17 23:23:27.667904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.234 ms 00:18:03.976 [2024-11-17 23:23:27.667912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.667996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.668006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:03.976 [2024-11-17 23:23:27.668014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:03.976 [2024-11-17 23:23:27.668026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.668063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.668072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:03.976 [2024-11-17 23:23:27.668085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:03.976 [2024-11-17 23:23:27.668094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.668117] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:03.976 [2024-11-17 23:23:27.669522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.669550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:03.976 [2024-11-17 23:23:27.669559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.409 ms 00:18:03.976 [2024-11-17 23:23:27.669566] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.669593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.669601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:03.976 [2024-11-17 23:23:27.669609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:03.976 [2024-11-17 23:23:27.669623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.669641] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:03.976 [2024-11-17 23:23:27.669660] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:03.976 [2024-11-17 23:23:27.669700] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:03.976 [2024-11-17 23:23:27.669718] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:03.976 [2024-11-17 23:23:27.669820] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:03.976 [2024-11-17 23:23:27.669830] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:03.976 [2024-11-17 23:23:27.669842] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:03.976 [2024-11-17 23:23:27.669852] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:03.976 [2024-11-17 23:23:27.669860] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:03.976 [2024-11-17 23:23:27.669868] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:03.976 [2024-11-17 23:23:27.669876] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:03.976 [2024-11-17 23:23:27.669903] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:03.976 [2024-11-17 23:23:27.669910] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:03.976 [2024-11-17 23:23:27.669918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.669928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:03.976 [2024-11-17 23:23:27.669936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:18:03.976 [2024-11-17 23:23:27.669942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.670028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.670063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:03.976 [2024-11-17 23:23:27.670071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:03.976 [2024-11-17 23:23:27.670078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.670173] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:03.976 [2024-11-17 23:23:27.670188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:03.976 [2024-11-17 23:23:27.670197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:03.976 
[2024-11-17 23:23:27.670205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.976 [2024-11-17 23:23:27.670218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:03.976 [2024-11-17 23:23:27.670225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:03.976 [2024-11-17 23:23:27.670233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:03.976 [2024-11-17 23:23:27.670241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:03.976 [2024-11-17 23:23:27.670248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:03.976 [2024-11-17 23:23:27.670256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:03.976 [2024-11-17 23:23:27.670264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:03.976 [2024-11-17 23:23:27.670274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:03.976 [2024-11-17 23:23:27.670282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:03.976 [2024-11-17 23:23:27.670290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:03.976 [2024-11-17 23:23:27.670297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:03.976 [2024-11-17 23:23:27.670305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.976 [2024-11-17 23:23:27.670312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:03.976 [2024-11-17 23:23:27.670320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:03.976 [2024-11-17 23:23:27.670327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.976 [2024-11-17 23:23:27.670335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:03.976 [2024-11-17 23:23:27.670343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:03.976 [2024-11-17 23:23:27.670350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:03.976 [2024-11-17 23:23:27.670357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:03.976 [2024-11-17 23:23:27.670364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:03.976 [2024-11-17 23:23:27.670373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:03.976 [2024-11-17 23:23:27.670380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:03.976 [2024-11-17 23:23:27.670388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:03.976 [2024-11-17 23:23:27.670400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:03.976 [2024-11-17 23:23:27.670407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:03.976 [2024-11-17 23:23:27.670415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:03.976 [2024-11-17 23:23:27.670422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:03.976 [2024-11-17 23:23:27.670429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:03.976 [2024-11-17 23:23:27.670437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:03.976 [2024-11-17 23:23:27.670444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:03.976 [2024-11-17 23:23:27.670452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:18:03.976 [2024-11-17 23:23:27.670459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:03.976 [2024-11-17 23:23:27.670466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:03.976 [2024-11-17 23:23:27.670474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:03.976 [2024-11-17 23:23:27.670481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:03.976 [2024-11-17 23:23:27.670488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.976 [2024-11-17 23:23:27.670495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:03.976 [2024-11-17 23:23:27.670503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:03.976 [2024-11-17 23:23:27.670510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.976 [2024-11-17 23:23:27.670520] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:03.976 [2024-11-17 23:23:27.670531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:03.976 [2024-11-17 23:23:27.670539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:03.976 [2024-11-17 23:23:27.670546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.976 [2024-11-17 23:23:27.670555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:03.976 [2024-11-17 23:23:27.670562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:03.976 [2024-11-17 23:23:27.670570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:03.976 [2024-11-17 23:23:27.670577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:03.976 [2024-11-17 23:23:27.670585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:03.976 [2024-11-17 23:23:27.670592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:03.976 [2024-11-17 23:23:27.670601] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:03.976 [2024-11-17 23:23:27.670611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:03.976 [2024-11-17 23:23:27.670620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:03.976 [2024-11-17 23:23:27.670628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:03.976 [2024-11-17 23:23:27.670636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:03.976 [2024-11-17 23:23:27.670644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:03.976 [2024-11-17 23:23:27.670654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:03.976 [2024-11-17 23:23:27.670662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:03.976 [2024-11-17 23:23:27.670670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:03.976 [2024-11-17 23:23:27.670678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:03.976 [2024-11-17 23:23:27.670686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:03.976 [2024-11-17 23:23:27.670694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:03.976 [2024-11-17 23:23:27.670702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:03.976 [2024-11-17 23:23:27.670713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:03.976 [2024-11-17 23:23:27.670720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:03.976 [2024-11-17 23:23:27.670727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:03.976 [2024-11-17 23:23:27.670734] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:03.976 [2024-11-17 23:23:27.670745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:03.976 [2024-11-17 23:23:27.670753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:03.976 [2024-11-17 23:23:27.670760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:03.976 [2024-11-17 23:23:27.670768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:03.976 [2024-11-17 23:23:27.670775] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:03.976 [2024-11-17 23:23:27.670785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.670793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:03.976 [2024-11-17 23:23:27.670800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:18:03.976 [2024-11-17 23:23:27.670810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.680268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.680303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:03.976 [2024-11-17 23:23:27.680312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.419 ms 00:18:03.976 [2024-11-17 23:23:27.680320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.680401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.680409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:03.976 [2024-11-17 23:23:27.680417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 
00:18:03.976 [2024-11-17 23:23:27.680424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.698548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.698592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:03.976 [2024-11-17 23:23:27.698603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.075 ms 00:18:03.976 [2024-11-17 23:23:27.698611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.698656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.698665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:03.976 [2024-11-17 23:23:27.698674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:03.976 [2024-11-17 23:23:27.698681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.699099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.699129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:03.976 [2024-11-17 23:23:27.699138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:18:03.976 [2024-11-17 23:23:27.699145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.699276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.699286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:03.976 [2024-11-17 23:23:27.699295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:18:03.976 [2024-11-17 23:23:27.699306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.705445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.705484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:03.976 [2024-11-17 23:23:27.705497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.115 ms 00:18:03.976 [2024-11-17 23:23:27.705507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.708602] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:03.976 [2024-11-17 23:23:27.708650] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:03.976 [2024-11-17 23:23:27.708664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.708675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:03.976 [2024-11-17 23:23:27.708686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.048 ms 00:18:03.976 [2024-11-17 23:23:27.708696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.723764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.723807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:03.976 [2024-11-17 23:23:27.723817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.021 ms 00:18:03.976 [2024-11-17 23:23:27.723825] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:03.976 [2024-11-17 23:23:27.725893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.976 [2024-11-17 23:23:27.725923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:03.976 [2024-11-17 23:23:27.725932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.016 ms 00:18:03.977 [2024-11-17 23:23:27.725939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.977 [2024-11-17 23:23:27.727840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.977 [2024-11-17 23:23:27.727872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:03.977 [2024-11-17 23:23:27.727893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.868 ms 00:18:03.977 [2024-11-17 23:23:27.727900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.977 [2024-11-17 23:23:27.728225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.977 [2024-11-17 23:23:27.728243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:03.977 [2024-11-17 23:23:27.728253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:18:03.977 [2024-11-17 23:23:27.728260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.977 [2024-11-17 23:23:27.745383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.977 [2024-11-17 23:23:27.745440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:03.977 [2024-11-17 23:23:27.745451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.106 ms 00:18:03.977 [2024-11-17 23:23:27.745460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.977 [2024-11-17 23:23:27.753068] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:03.977 [2024-11-17 23:23:27.755487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.977 [2024-11-17 23:23:27.755518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:03.977 [2024-11-17 23:23:27.755533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.985 ms 00:18:03.977 [2024-11-17 23:23:27.755541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.977 [2024-11-17 23:23:27.755598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.977 [2024-11-17 23:23:27.755613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:03.977 [2024-11-17 23:23:27.755622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:03.977 [2024-11-17 23:23:27.755630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.977 [2024-11-17 23:23:27.755726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.977 [2024-11-17 23:23:27.755743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:03.977 [2024-11-17 23:23:27.755752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:03.977 [2024-11-17 23:23:27.755767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.977 [2024-11-17 23:23:27.755793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.977 [2024-11-17 23:23:27.755803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:18:03.977 [2024-11-17 23:23:27.755811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:03.977 [2024-11-17 23:23:27.755818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.977 [2024-11-17 23:23:27.755846] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:03.977 [2024-11-17 23:23:27.755856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.977 [2024-11-17 23:23:27.755865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:03.977 [2024-11-17 23:23:27.755873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:03.977 [2024-11-17 23:23:27.755894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.977 [2024-11-17 23:23:27.760216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.977 [2024-11-17 23:23:27.760251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:03.977 [2024-11-17 23:23:27.760261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.298 ms 00:18:03.977 [2024-11-17 23:23:27.760269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.977 [2024-11-17 23:23:27.760346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.977 [2024-11-17 23:23:27.760355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:03.977 [2024-11-17 23:23:27.760364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:03.977 [2024-11-17 23:23:27.760371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.977 [2024-11-17 23:23:27.761409] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 102.989 ms, result 0 00:18:05.360  [2024-11-17T23:23:30.125Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-17T23:23:31.068Z] Copying: 41/1024 [MB] (22 MBps) [2024-11-17T23:23:32.010Z] Copying: 63/1024 [MB] (22 MBps) [2024-11-17T23:23:33.044Z] Copying: 81/1024 [MB] (18 MBps) [2024-11-17T23:23:33.987Z] Copying: 101/1024 [MB] (19 MBps) [2024-11-17T23:23:34.931Z] Copying: 120/1024 [MB] (18 MBps) [2024-11-17T23:23:35.874Z] Copying: 136/1024 [MB] (16 MBps) [2024-11-17T23:23:36.818Z] Copying: 157/1024 [MB] (20 MBps) [2024-11-17T23:23:38.205Z] Copying: 170/1024 [MB] (13 MBps) [2024-11-17T23:23:38.781Z] Copying: 182/1024 [MB] (12 MBps) [2024-11-17T23:23:40.173Z] Copying: 202/1024 [MB] (19 MBps) [2024-11-17T23:23:41.116Z] Copying: 219/1024 [MB] (17 MBps) [2024-11-17T23:23:42.062Z] Copying: 238/1024 [MB] (18 MBps) [2024-11-17T23:23:43.006Z] Copying: 256/1024 [MB] (17 MBps) [2024-11-17T23:23:43.952Z] Copying: 267/1024 [MB] (11 MBps) [2024-11-17T23:23:44.901Z] Copying: 282/1024 [MB] (14 MBps) [2024-11-17T23:23:45.848Z] Copying: 296/1024 [MB] (14 MBps) [2024-11-17T23:23:46.794Z] Copying: 313508/1048576 [kB] (10124 kBps) [2024-11-17T23:23:48.181Z] Copying: 323572/1048576 [kB] (10064 kBps) [2024-11-17T23:23:49.122Z] Copying: 328/1024 [MB] (12 MBps) [2024-11-17T23:23:50.086Z] Copying: 343/1024 [MB] (14 MBps) [2024-11-17T23:23:51.029Z] Copying: 355/1024 [MB] (11 MBps) [2024-11-17T23:23:51.974Z] Copying: 386/1024 [MB] (31 MBps) [2024-11-17T23:23:52.919Z] Copying: 396/1024 [MB] (10 MBps) [2024-11-17T23:23:53.862Z] Copying: 414/1024 [MB] (18 MBps) [2024-11-17T23:23:54.808Z] Copying: 430/1024 [MB] (15 MBps) [2024-11-17T23:23:56.182Z] Copying: 446/1024 [MB] (15 MBps) 
[2024-11-17T23:23:57.118Z] Copying: 468/1024 [MB] (22 MBps) [2024-11-17T23:23:58.053Z] Copying: 492/1024 [MB] (23 MBps) [2024-11-17T23:23:58.996Z] Copying: 514/1024 [MB] (21 MBps) [2024-11-17T23:23:59.931Z] Copying: 535/1024 [MB] (21 MBps) [2024-11-17T23:24:00.876Z] Copying: 561/1024 [MB] (25 MBps) [2024-11-17T23:24:01.819Z] Copying: 572/1024 [MB] (10 MBps) [2024-11-17T23:24:03.207Z] Copying: 582/1024 [MB] (10 MBps) [2024-11-17T23:24:03.781Z] Copying: 592/1024 [MB] (10 MBps) [2024-11-17T23:24:04.806Z] Copying: 603/1024 [MB] (10 MBps) [2024-11-17T23:24:06.189Z] Copying: 653/1024 [MB] (50 MBps) [2024-11-17T23:24:07.130Z] Copying: 667/1024 [MB] (14 MBps) [2024-11-17T23:24:08.074Z] Copying: 680/1024 [MB] (12 MBps) [2024-11-17T23:24:09.015Z] Copying: 721/1024 [MB] (40 MBps) [2024-11-17T23:24:09.958Z] Copying: 776/1024 [MB] (55 MBps) [2024-11-17T23:24:10.902Z] Copying: 802/1024 [MB] (26 MBps) [2024-11-17T23:24:11.844Z] Copying: 828/1024 [MB] (25 MBps) [2024-11-17T23:24:12.786Z] Copying: 881/1024 [MB] (53 MBps) [2024-11-17T23:24:14.172Z] Copying: 935/1024 [MB] (54 MBps) [2024-11-17T23:24:14.432Z] Copying: 989/1024 [MB] (53 MBps) [2024-11-17T23:24:14.432Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-17 23:24:14.413483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.611 [2024-11-17 23:24:14.413517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:50.611 [2024-11-17 23:24:14.413529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:50.611 [2024-11-17 23:24:14.413535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.611 [2024-11-17 23:24:14.413556] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:50.611 [2024-11-17 23:24:14.413956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.611 [2024-11-17 23:24:14.413977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:50.611 [2024-11-17 23:24:14.413984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:18:50.611 [2024-11-17 23:24:14.413996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.611 [2024-11-17 23:24:14.415262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.611 [2024-11-17 23:24:14.415293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:50.611 [2024-11-17 23:24:14.415300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:18:50.611 [2024-11-17 23:24:14.415309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.611 [2024-11-17 23:24:14.426278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.611 [2024-11-17 23:24:14.426309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:50.611 [2024-11-17 23:24:14.426317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.954 ms 00:18:50.611 [2024-11-17 23:24:14.426323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.873 [2024-11-17 23:24:14.431152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.873 [2024-11-17 23:24:14.431176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:50.873 [2024-11-17 23:24:14.431184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.807 ms 00:18:50.873 [2024-11-17 23:24:14.431190] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:50.873 [2024-11-17 23:24:14.432093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.873 [2024-11-17 23:24:14.432122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:50.873 [2024-11-17 23:24:14.432129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.867 ms 00:18:50.873 [2024-11-17 23:24:14.432135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.873 [2024-11-17 23:24:14.435290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.873 [2024-11-17 23:24:14.435317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:50.873 [2024-11-17 23:24:14.435325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.133 ms 00:18:50.873 [2024-11-17 23:24:14.435331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.873 [2024-11-17 23:24:14.435413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.873 [2024-11-17 23:24:14.435420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:50.873 [2024-11-17 23:24:14.435427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:50.873 [2024-11-17 23:24:14.435432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.873 [2024-11-17 23:24:14.437059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.873 [2024-11-17 23:24:14.437086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:50.873 [2024-11-17 23:24:14.437092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.615 ms 00:18:50.873 [2024-11-17 23:24:14.437097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.873 [2024-11-17 23:24:14.438155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.873 [2024-11-17 23:24:14.438179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:50.873 [2024-11-17 23:24:14.438186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:18:50.873 [2024-11-17 23:24:14.438191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.873 [2024-11-17 23:24:14.438892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.873 [2024-11-17 23:24:14.438915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:50.873 [2024-11-17 23:24:14.438922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:18:50.873 [2024-11-17 23:24:14.438927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.873 [2024-11-17 23:24:14.439792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.873 [2024-11-17 23:24:14.439818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:50.873 [2024-11-17 23:24:14.439824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:18:50.873 [2024-11-17 23:24:14.439829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.873 [2024-11-17 23:24:14.439849] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:50.873 [2024-11-17 23:24:14.439864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:50.873 [2024-11-17 23:24:14.439872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:50.873 [2024-11-17 23:24:14.439887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:50.873 [2024-11-17 23:24:14.439893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:50.873 [2024-11-17 23:24:14.439899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:50.873 [2024-11-17 23:24:14.439904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:50.873 [2024-11-17 23:24:14.439910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:50.873 [2024-11-17 23:24:14.439916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:50.873 [2024-11-17 23:24:14.439922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:50.873 [2024-11-17 23:24:14.439927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.439933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.439939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.439945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.439951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.439956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.439962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.439968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.439973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.439978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.439984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.439990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.439995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440025] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 
23:24:14.440174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 
00:18:50.874 [2024-11-17 23:24:14.440315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:50.874 [2024-11-17 23:24:14.440406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:50.875 [2024-11-17 23:24:14.440411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:50.875 [2024-11-17 23:24:14.440417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:50.875 [2024-11-17 23:24:14.440423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:50.875 [2024-11-17 23:24:14.440429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:50.875 [2024-11-17 23:24:14.440434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:50.875 [2024-11-17 23:24:14.440440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:50.875 [2024-11-17 23:24:14.440446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:50.875 [2024-11-17 23:24:14.440458] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:50.875 
[2024-11-17 23:24:14.440464] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e1a16e31-7428-4c79-b749-82ad6e64cef8 00:18:50.875 [2024-11-17 23:24:14.440469] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:50.875 [2024-11-17 23:24:14.440475] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:50.875 [2024-11-17 23:24:14.440481] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:50.875 [2024-11-17 23:24:14.440486] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:50.875 [2024-11-17 23:24:14.440492] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:50.875 [2024-11-17 23:24:14.440498] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:50.875 [2024-11-17 23:24:14.440503] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:50.875 [2024-11-17 23:24:14.440508] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:50.875 [2024-11-17 23:24:14.440513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:50.875 [2024-11-17 23:24:14.440518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.875 [2024-11-17 23:24:14.440524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:50.875 [2024-11-17 23:24:14.440534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:18:50.875 [2024-11-17 23:24:14.440539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.875 [2024-11-17 23:24:14.441707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.875 [2024-11-17 23:24:14.441727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:50.875 [2024-11-17 23:24:14.441734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.156 ms 00:18:50.875 [2024-11-17 23:24:14.441740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.875 [2024-11-17 23:24:14.441806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.875 [2024-11-17 23:24:14.441813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:50.875 [2024-11-17 23:24:14.441820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:50.875 [2024-11-17 23:24:14.441828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.875 [2024-11-17 23:24:14.445813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.875 [2024-11-17 23:24:14.445836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:50.875 [2024-11-17 23:24:14.445843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.875 [2024-11-17 23:24:14.445849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.875 [2024-11-17 23:24:14.445907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.875 [2024-11-17 23:24:14.445915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:50.875 [2024-11-17 23:24:14.445921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.875 [2024-11-17 23:24:14.445930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.875 [2024-11-17 23:24:14.445962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.875 [2024-11-17 
23:24:14.445969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:50.875 [2024-11-17 23:24:14.445975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.875 [2024-11-17 23:24:14.445980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.875 [2024-11-17 23:24:14.445991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.875 [2024-11-17 23:24:14.445997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:50.875 [2024-11-17 23:24:14.446004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.875 [2024-11-17 23:24:14.446010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.875 [2024-11-17 23:24:14.453308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.875 [2024-11-17 23:24:14.453343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:50.875 [2024-11-17 23:24:14.453351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.875 [2024-11-17 23:24:14.453357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.875 [2024-11-17 23:24:14.459358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.875 [2024-11-17 23:24:14.459394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:50.875 [2024-11-17 23:24:14.459402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.875 [2024-11-17 23:24:14.459408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.875 [2024-11-17 23:24:14.459444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.875 [2024-11-17 23:24:14.459451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:50.875 [2024-11-17 23:24:14.459462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.875 [2024-11-17 23:24:14.459470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.875 [2024-11-17 23:24:14.459506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.875 [2024-11-17 23:24:14.459513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:50.875 [2024-11-17 23:24:14.459519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.875 [2024-11-17 23:24:14.459527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.875 [2024-11-17 23:24:14.459580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.875 [2024-11-17 23:24:14.459594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:50.875 [2024-11-17 23:24:14.459600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.875 [2024-11-17 23:24:14.459606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.875 [2024-11-17 23:24:14.459633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.875 [2024-11-17 23:24:14.459640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:50.875 [2024-11-17 23:24:14.459645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.875 [2024-11-17 23:24:14.459653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.875 [2024-11-17 23:24:14.459680] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.875 [2024-11-17 23:24:14.459687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:50.875 [2024-11-17 23:24:14.459693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.875 [2024-11-17 23:24:14.459698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.875 [2024-11-17 23:24:14.459730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:50.875 [2024-11-17 23:24:14.459738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:50.875 [2024-11-17 23:24:14.459744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:50.875 [2024-11-17 23:24:14.459754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.875 [2024-11-17 23:24:14.459858] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 46.353 ms, result 0 00:18:51.448 00:18:51.448 00:18:51.448 23:24:15 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:51.448 [2024-11-17 23:24:15.125820] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:18:51.448 [2024-11-17 23:24:15.125963] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86281 ] 00:18:51.710 [2024-11-17 23:24:15.268932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.710 [2024-11-17 23:24:15.285949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.710 [2024-11-17 23:24:15.366234] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:51.710 [2024-11-17 23:24:15.366286] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:51.710 [2024-11-17 23:24:15.508210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.710 [2024-11-17 23:24:15.508246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:51.710 [2024-11-17 23:24:15.508256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:51.710 [2024-11-17 23:24:15.508262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.710 [2024-11-17 23:24:15.508297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.710 [2024-11-17 23:24:15.508307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:51.710 [2024-11-17 23:24:15.508317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:18:51.710 [2024-11-17 23:24:15.508322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.710 [2024-11-17 23:24:15.508336] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:51.710 [2024-11-17 23:24:15.508543] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:51.710 [2024-11-17 23:24:15.508555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.710 [2024-11-17 23:24:15.508562] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:51.710 [2024-11-17 23:24:15.508568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:18:51.710 [2024-11-17 23:24:15.508575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.710 [2024-11-17 23:24:15.509501] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:51.710 [2024-11-17 23:24:15.511432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.710 [2024-11-17 23:24:15.511461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:51.710 [2024-11-17 23:24:15.511469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.932 ms 00:18:51.710 [2024-11-17 23:24:15.511475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.710 [2024-11-17 23:24:15.511517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.710 [2024-11-17 23:24:15.511524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:51.710 [2024-11-17 23:24:15.511532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:51.710 [2024-11-17 23:24:15.511537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.710 [2024-11-17 23:24:15.515766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.710 [2024-11-17 23:24:15.515797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:51.710 [2024-11-17 23:24:15.515809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.201 ms 00:18:51.710 [2024-11-17 23:24:15.515818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.710 [2024-11-17 23:24:15.515894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.710 [2024-11-17 23:24:15.515901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:51.710 [2024-11-17 23:24:15.515908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:51.710 [2024-11-17 23:24:15.515913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.710 [2024-11-17 23:24:15.515951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.710 [2024-11-17 23:24:15.515958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:51.710 [2024-11-17 23:24:15.515965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:51.710 [2024-11-17 23:24:15.515970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.710 [2024-11-17 23:24:15.515988] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:51.710 [2024-11-17 23:24:15.517193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.710 [2024-11-17 23:24:15.517216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:51.710 [2024-11-17 23:24:15.517223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.208 ms 00:18:51.710 [2024-11-17 23:24:15.517229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.710 [2024-11-17 23:24:15.517256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.710 [2024-11-17 23:24:15.517267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:51.710 [2024-11-17 23:24:15.517274] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:51.710 [2024-11-17 23:24:15.517280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.710 [2024-11-17 23:24:15.517298] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:51.710 [2024-11-17 23:24:15.517313] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:51.710 [2024-11-17 23:24:15.517344] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:51.710 [2024-11-17 23:24:15.517357] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:51.710 [2024-11-17 23:24:15.517438] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:51.710 [2024-11-17 23:24:15.517450] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:51.710 [2024-11-17 23:24:15.517458] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:51.710 [2024-11-17 23:24:15.517468] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:51.710 [2024-11-17 23:24:15.517474] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:51.710 [2024-11-17 23:24:15.517481] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:51.710 [2024-11-17 23:24:15.517487] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:51.710 [2024-11-17 23:24:15.517492] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:51.711 [2024-11-17 23:24:15.517497] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:51.711 [2024-11-17 23:24:15.517503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.711 [2024-11-17 23:24:15.517511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:51.711 [2024-11-17 23:24:15.517517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:18:51.711 [2024-11-17 23:24:15.517525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.711 [2024-11-17 23:24:15.517589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.711 [2024-11-17 23:24:15.517596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:51.711 [2024-11-17 23:24:15.517602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:51.711 [2024-11-17 23:24:15.517607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.711 [2024-11-17 23:24:15.517678] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:51.711 [2024-11-17 23:24:15.517692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:51.711 [2024-11-17 23:24:15.517698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:51.711 [2024-11-17 23:24:15.517704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.711 [2024-11-17 23:24:15.517710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:51.711 [2024-11-17 23:24:15.517715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:51.711 
[2024-11-17 23:24:15.517720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:51.711 [2024-11-17 23:24:15.517725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:51.711 [2024-11-17 23:24:15.517730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:51.711 [2024-11-17 23:24:15.517735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:51.711 [2024-11-17 23:24:15.517740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:51.711 [2024-11-17 23:24:15.517745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:51.711 [2024-11-17 23:24:15.517754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:51.711 [2024-11-17 23:24:15.517760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:51.711 [2024-11-17 23:24:15.517766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:51.711 [2024-11-17 23:24:15.517771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.711 [2024-11-17 23:24:15.517776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:51.711 [2024-11-17 23:24:15.517781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:51.711 [2024-11-17 23:24:15.517786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.711 [2024-11-17 23:24:15.517791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:51.711 [2024-11-17 23:24:15.517796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:51.711 [2024-11-17 23:24:15.517801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:51.711 [2024-11-17 23:24:15.517805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:51.711 [2024-11-17 23:24:15.517810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:51.711 [2024-11-17 23:24:15.517815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:51.711 [2024-11-17 23:24:15.517820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:51.711 [2024-11-17 23:24:15.517824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:51.711 [2024-11-17 23:24:15.517829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:51.711 [2024-11-17 23:24:15.517837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:51.711 [2024-11-17 23:24:15.517842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:51.711 [2024-11-17 23:24:15.517848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:51.711 [2024-11-17 23:24:15.517853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:51.711 [2024-11-17 23:24:15.517859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:51.711 [2024-11-17 23:24:15.517864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:51.711 [2024-11-17 23:24:15.517870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:51.711 [2024-11-17 23:24:15.517875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:51.711 [2024-11-17 23:24:15.517889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:51.711 [2024-11-17 23:24:15.517896] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_log 00:18:51.711 [2024-11-17 23:24:15.517901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:51.711 [2024-11-17 23:24:15.517907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.711 [2024-11-17 23:24:15.517913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:51.711 [2024-11-17 23:24:15.517918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:51.711 [2024-11-17 23:24:15.517924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.711 [2024-11-17 23:24:15.517930] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:51.711 [2024-11-17 23:24:15.517940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:51.711 [2024-11-17 23:24:15.517950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:51.711 [2024-11-17 23:24:15.517956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.711 [2024-11-17 23:24:15.517962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:51.711 [2024-11-17 23:24:15.517968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:51.711 [2024-11-17 23:24:15.517974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:51.711 [2024-11-17 23:24:15.517980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:51.711 [2024-11-17 23:24:15.517986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:51.711 [2024-11-17 23:24:15.517992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:51.711 [2024-11-17 23:24:15.517998] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:51.711 [2024-11-17 23:24:15.518006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:51.711 [2024-11-17 23:24:15.518013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:51.711 [2024-11-17 23:24:15.518019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:51.711 [2024-11-17 23:24:15.518025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:51.711 [2024-11-17 23:24:15.518031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:51.711 [2024-11-17 23:24:15.518037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:51.711 [2024-11-17 23:24:15.518044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:51.711 [2024-11-17 23:24:15.518051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:51.711 [2024-11-17 23:24:15.518057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:51.711 [2024-11-17 23:24:15.518063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:51.711 [2024-11-17 23:24:15.518069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:51.711 [2024-11-17 23:24:15.518075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:51.711 [2024-11-17 23:24:15.518085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:51.711 [2024-11-17 23:24:15.518091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:51.711 [2024-11-17 23:24:15.518098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:51.711 [2024-11-17 23:24:15.518104] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:51.711 [2024-11-17 23:24:15.518111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:51.711 [2024-11-17 23:24:15.518120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:51.711 [2024-11-17 23:24:15.518126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:51.711 [2024-11-17 23:24:15.518132] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:51.711 [2024-11-17 23:24:15.518138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:51.711 [2024-11-17 23:24:15.518144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.711 [2024-11-17 23:24:15.518154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:51.711 [2024-11-17 23:24:15.518161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:18:51.711 [2024-11-17 23:24:15.518168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.711 [2024-11-17 23:24:15.525802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.711 [2024-11-17 23:24:15.525830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:51.711 [2024-11-17 23:24:15.525840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.601 ms 00:18:51.711 [2024-11-17 23:24:15.525848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.711 [2024-11-17 23:24:15.525918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.711 [2024-11-17 23:24:15.525924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:51.711 [2024-11-17 23:24:15.525933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:51.711 [2024-11-17 23:24:15.525940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.986 [2024-11-17 23:24:15.542753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.986 [2024-11-17 23:24:15.542786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:18:51.986 [2024-11-17 23:24:15.542799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.781 ms 00:18:51.986 [2024-11-17 23:24:15.542806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.986 [2024-11-17 23:24:15.542840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.986 [2024-11-17 23:24:15.542848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:51.986 [2024-11-17 23:24:15.542854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:51.986 [2024-11-17 23:24:15.542860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.986 [2024-11-17 23:24:15.543190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.986 [2024-11-17 23:24:15.543214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:51.986 [2024-11-17 23:24:15.543221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:18:51.986 [2024-11-17 23:24:15.543228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.986 [2024-11-17 23:24:15.543321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.986 [2024-11-17 23:24:15.543332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:51.986 [2024-11-17 23:24:15.543341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:18:51.986 [2024-11-17 23:24:15.543349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.986 [2024-11-17 23:24:15.549150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.986 [2024-11-17 23:24:15.549200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:51.986 [2024-11-17 23:24:15.549214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.784 ms 00:18:51.986 [2024-11-17 23:24:15.549226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.986 [2024-11-17 23:24:15.551818] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:51.986 [2024-11-17 23:24:15.551864] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:51.986 [2024-11-17 23:24:15.551899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.986 [2024-11-17 23:24:15.551912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:51.986 [2024-11-17 23:24:15.551924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.561 ms 00:18:51.986 [2024-11-17 23:24:15.551934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.986 [2024-11-17 23:24:15.564394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.986 [2024-11-17 23:24:15.564425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:51.986 [2024-11-17 23:24:15.564435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.395 ms 00:18:51.986 [2024-11-17 23:24:15.564440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.986 [2024-11-17 23:24:15.565698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.986 [2024-11-17 23:24:15.565723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:51.986 [2024-11-17 23:24:15.565729] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.232 ms 00:18:51.986 [2024-11-17 23:24:15.565734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.986 [2024-11-17 23:24:15.566832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.986 [2024-11-17 23:24:15.566854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:51.986 [2024-11-17 23:24:15.566860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.074 ms 00:18:51.986 [2024-11-17 23:24:15.566865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.986 [2024-11-17 23:24:15.567108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.986 [2024-11-17 23:24:15.567126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:51.986 [2024-11-17 23:24:15.567133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:18:51.986 [2024-11-17 23:24:15.567142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.986 [2024-11-17 23:24:15.580191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.986 [2024-11-17 23:24:15.580227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:51.986 [2024-11-17 23:24:15.580235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.036 ms 00:18:51.986 [2024-11-17 23:24:15.580241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.986 [2024-11-17 23:24:15.585871] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:51.986 [2024-11-17 23:24:15.587600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.986 [2024-11-17 23:24:15.587624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:51.986 [2024-11-17 23:24:15.587634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.329 ms 00:18:51.986 [2024-11-17 23:24:15.587640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.986 [2024-11-17 23:24:15.587677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.986 [2024-11-17 23:24:15.587689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:51.986 [2024-11-17 23:24:15.587696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:51.986 [2024-11-17 23:24:15.587701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.986 [2024-11-17 23:24:15.587756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.986 [2024-11-17 23:24:15.587765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:51.986 [2024-11-17 23:24:15.587779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:18:51.986 [2024-11-17 23:24:15.587786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.986 [2024-11-17 23:24:15.587803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.986 [2024-11-17 23:24:15.587810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:51.986 [2024-11-17 23:24:15.587816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:51.986 [2024-11-17 23:24:15.587821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.986 [2024-11-17 23:24:15.587845] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: 
*NOTICE*: [FTL][ftl0] Self test skipped 00:18:51.986 [2024-11-17 23:24:15.587852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.986 [2024-11-17 23:24:15.587860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:51.986 [2024-11-17 23:24:15.587869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:51.986 [2024-11-17 23:24:15.587875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.986 [2024-11-17 23:24:15.590503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.986 [2024-11-17 23:24:15.590533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:51.986 [2024-11-17 23:24:15.590546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.602 ms 00:18:51.986 [2024-11-17 23:24:15.590552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.986 [2024-11-17 23:24:15.590604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.986 [2024-11-17 23:24:15.590612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:51.986 [2024-11-17 23:24:15.590619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:51.986 [2024-11-17 23:24:15.590628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.986 [2024-11-17 23:24:15.591315] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 82.809 ms, result 0 00:18:53.371  [2024-11-17T23:24:18.134Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-17T23:24:19.078Z] Copying: 26/1024 [MB] (16 MBps) [2024-11-17T23:24:20.029Z] Copying: 39/1024 [MB] (12 MBps) [2024-11-17T23:24:20.972Z] Copying: 58/1024 [MB] (19 MBps) [2024-11-17T23:24:21.917Z] Copying: 73/1024 [MB] (14 MBps) [2024-11-17T23:24:22.877Z] Copying: 88/1024 [MB] (15 MBps) [2024-11-17T23:24:23.826Z] Copying: 102/1024 [MB] (13 MBps) [2024-11-17T23:24:24.769Z] Copying: 125/1024 [MB] (23 MBps) [2024-11-17T23:24:26.176Z] Copying: 139/1024 [MB] (14 MBps) [2024-11-17T23:24:27.120Z] Copying: 150/1024 [MB] (10 MBps) [2024-11-17T23:24:28.064Z] Copying: 160/1024 [MB] (10 MBps) [2024-11-17T23:24:29.008Z] Copying: 171/1024 [MB] (10 MBps) [2024-11-17T23:24:29.966Z] Copying: 185/1024 [MB] (14 MBps) [2024-11-17T23:24:30.921Z] Copying: 196/1024 [MB] (11 MBps) [2024-11-17T23:24:31.861Z] Copying: 207/1024 [MB] (10 MBps) [2024-11-17T23:24:32.833Z] Copying: 217/1024 [MB] (10 MBps) [2024-11-17T23:24:33.778Z] Copying: 236/1024 [MB] (18 MBps) [2024-11-17T23:24:35.161Z] Copying: 248/1024 [MB] (12 MBps) [2024-11-17T23:24:36.190Z] Copying: 268/1024 [MB] (20 MBps) [2024-11-17T23:24:37.136Z] Copying: 287/1024 [MB] (19 MBps) [2024-11-17T23:24:38.080Z] Copying: 303/1024 [MB] (16 MBps) [2024-11-17T23:24:39.024Z] Copying: 314/1024 [MB] (10 MBps) [2024-11-17T23:24:39.968Z] Copying: 326/1024 [MB] (11 MBps) [2024-11-17T23:24:40.914Z] Copying: 340/1024 [MB] (14 MBps) [2024-11-17T23:24:41.870Z] Copying: 354/1024 [MB] (14 MBps) [2024-11-17T23:24:42.814Z] Copying: 376/1024 [MB] (21 MBps) [2024-11-17T23:24:44.203Z] Copying: 398/1024 [MB] (22 MBps) [2024-11-17T23:24:44.782Z] Copying: 420/1024 [MB] (21 MBps) [2024-11-17T23:24:46.170Z] Copying: 438/1024 [MB] (17 MBps) [2024-11-17T23:24:47.114Z] Copying: 462/1024 [MB] (24 MBps) [2024-11-17T23:24:48.058Z] Copying: 481/1024 [MB] (19 MBps) [2024-11-17T23:24:49.002Z] Copying: 502/1024 [MB] (21 MBps) [2024-11-17T23:24:49.945Z] Copying: 522/1024 [MB] (19 
MBps) [2024-11-17T23:24:50.888Z] Copying: 541/1024 [MB] (19 MBps) [2024-11-17T23:24:51.833Z] Copying: 563/1024 [MB] (21 MBps) [2024-11-17T23:24:52.775Z] Copying: 585/1024 [MB] (22 MBps) [2024-11-17T23:24:54.168Z] Copying: 606/1024 [MB] (21 MBps) [2024-11-17T23:24:55.113Z] Copying: 622/1024 [MB] (15 MBps) [2024-11-17T23:24:56.057Z] Copying: 643/1024 [MB] (20 MBps) [2024-11-17T23:24:57.012Z] Copying: 659/1024 [MB] (16 MBps) [2024-11-17T23:24:57.960Z] Copying: 677/1024 [MB] (18 MBps) [2024-11-17T23:24:58.902Z] Copying: 699/1024 [MB] (21 MBps) [2024-11-17T23:24:59.841Z] Copying: 719/1024 [MB] (20 MBps) [2024-11-17T23:25:00.786Z] Copying: 737/1024 [MB] (17 MBps) [2024-11-17T23:25:02.173Z] Copying: 757/1024 [MB] (20 MBps) [2024-11-17T23:25:03.115Z] Copying: 777/1024 [MB] (19 MBps) [2024-11-17T23:25:04.057Z] Copying: 794/1024 [MB] (16 MBps) [2024-11-17T23:25:05.002Z] Copying: 809/1024 [MB] (14 MBps) [2024-11-17T23:25:05.946Z] Copying: 819/1024 [MB] (10 MBps) [2024-11-17T23:25:06.890Z] Copying: 830/1024 [MB] (10 MBps) [2024-11-17T23:25:07.866Z] Copying: 841/1024 [MB] (10 MBps) [2024-11-17T23:25:08.856Z] Copying: 851/1024 [MB] (10 MBps) [2024-11-17T23:25:09.818Z] Copying: 861/1024 [MB] (10 MBps) [2024-11-17T23:25:11.203Z] Copying: 872/1024 [MB] (10 MBps) [2024-11-17T23:25:11.777Z] Copying: 884/1024 [MB] (12 MBps) [2024-11-17T23:25:13.164Z] Copying: 895/1024 [MB] (10 MBps) [2024-11-17T23:25:14.109Z] Copying: 916/1024 [MB] (20 MBps) [2024-11-17T23:25:15.051Z] Copying: 930/1024 [MB] (14 MBps) [2024-11-17T23:25:15.996Z] Copying: 942/1024 [MB] (11 MBps) [2024-11-17T23:25:16.937Z] Copying: 953/1024 [MB] (10 MBps) [2024-11-17T23:25:17.891Z] Copying: 963/1024 [MB] (10 MBps) [2024-11-17T23:25:18.836Z] Copying: 981/1024 [MB] (17 MBps) [2024-11-17T23:25:19.781Z] Copying: 1000/1024 [MB] (19 MBps) [2024-11-17T23:25:20.725Z] Copying: 1014/1024 [MB] (13 MBps) [2024-11-17T23:25:20.725Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-17 23:25:20.605813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.904 [2024-11-17 23:25:20.605930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:56.904 [2024-11-17 23:25:20.605953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:56.904 [2024-11-17 23:25:20.605962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.904 [2024-11-17 23:25:20.605993] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:56.904 [2024-11-17 23:25:20.606798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.904 [2024-11-17 23:25:20.606838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:56.904 [2024-11-17 23:25:20.606850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:19:56.904 [2024-11-17 23:25:20.606860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.904 [2024-11-17 23:25:20.607131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.904 [2024-11-17 23:25:20.607149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:56.904 [2024-11-17 23:25:20.607159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:19:56.904 [2024-11-17 23:25:20.607167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.904 [2024-11-17 23:25:20.611241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.904 [2024-11-17 
23:25:20.611270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:56.904 [2024-11-17 23:25:20.611281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.054 ms 00:19:56.904 [2024-11-17 23:25:20.611290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.904 [2024-11-17 23:25:20.618783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.904 [2024-11-17 23:25:20.618836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:56.904 [2024-11-17 23:25:20.618848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.469 ms 00:19:56.904 [2024-11-17 23:25:20.618856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.904 [2024-11-17 23:25:20.622227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.904 [2024-11-17 23:25:20.622296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:56.904 [2024-11-17 23:25:20.622309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.283 ms 00:19:56.904 [2024-11-17 23:25:20.622319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.904 [2024-11-17 23:25:20.627171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.904 [2024-11-17 23:25:20.627230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:56.904 [2024-11-17 23:25:20.627243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.802 ms 00:19:56.904 [2024-11-17 23:25:20.627251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.905 [2024-11-17 23:25:20.627380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.905 [2024-11-17 23:25:20.627391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:56.905 [2024-11-17 23:25:20.627413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:19:56.905 [2024-11-17 23:25:20.627422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.905 [2024-11-17 23:25:20.630538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.905 [2024-11-17 23:25:20.630593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:56.905 [2024-11-17 23:25:20.630603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.095 ms 00:19:56.905 [2024-11-17 23:25:20.630611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.905 [2024-11-17 23:25:20.633673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.905 [2024-11-17 23:25:20.633726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:56.905 [2024-11-17 23:25:20.633737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.019 ms 00:19:56.905 [2024-11-17 23:25:20.633744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.905 [2024-11-17 23:25:20.636041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.905 [2024-11-17 23:25:20.636087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:56.905 [2024-11-17 23:25:20.636097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.256 ms 00:19:56.905 [2024-11-17 23:25:20.636104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.905 [2024-11-17 23:25:20.638702] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.905 [2024-11-17 23:25:20.638751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:56.905 [2024-11-17 23:25:20.638761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.524 ms 00:19:56.905 [2024-11-17 23:25:20.638769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.905 [2024-11-17 23:25:20.638807] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:56.905 [2024-11-17 23:25:20.638823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.638834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.638843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.638851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.638859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.638867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.638875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.638900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.638908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.638916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.638926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.638934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.638942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.638950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.638958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.638965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.638973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.638981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.638988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.638996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639011] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 
[2024-11-17 23:25:20.639218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:56.905 [2024-11-17 23:25:20.639257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 
state: free 00:19:56.906 [2024-11-17 23:25:20.639416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 
0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:56.906 [2024-11-17 23:25:20.639646] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:56.906 [2024-11-17 23:25:20.639654] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e1a16e31-7428-4c79-b749-82ad6e64cef8 00:19:56.906 [2024-11-17 23:25:20.639663] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:56.906 [2024-11-17 23:25:20.639671] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:56.906 [2024-11-17 23:25:20.639679] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:56.906 [2024-11-17 23:25:20.639687] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:56.906 [2024-11-17 23:25:20.639694] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:56.906 [2024-11-17 23:25:20.639707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:56.906 [2024-11-17 23:25:20.639715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:56.906 [2024-11-17 23:25:20.639722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:56.906 [2024-11-17 23:25:20.639729] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:56.906 [2024-11-17 23:25:20.639736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.906 [2024-11-17 23:25:20.639748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:56.906 [2024-11-17 23:25:20.639770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:19:56.906 [2024-11-17 23:25:20.639778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.906 [2024-11-17 23:25:20.642139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.906 [2024-11-17 23:25:20.642179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:56.906 [2024-11-17 23:25:20.642190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.342 ms 00:19:56.906 [2024-11-17 23:25:20.642199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.906 [2024-11-17 23:25:20.642339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.906 [2024-11-17 23:25:20.642350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:56.906 [2024-11-17 23:25:20.642359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:19:56.906 [2024-11-17 23:25:20.642367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.906 [2024-11-17 23:25:20.649988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.906 [2024-11-17 23:25:20.650042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:56.906 [2024-11-17 23:25:20.650053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:19:56.906 [2024-11-17 23:25:20.650061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.906 [2024-11-17 23:25:20.650120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.906 [2024-11-17 23:25:20.650129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:56.906 [2024-11-17 23:25:20.650137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.907 [2024-11-17 23:25:20.650144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.907 [2024-11-17 23:25:20.650213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.907 [2024-11-17 23:25:20.650224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:56.907 [2024-11-17 23:25:20.650233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.907 [2024-11-17 23:25:20.650241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.907 [2024-11-17 23:25:20.650262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.907 [2024-11-17 23:25:20.650272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:56.907 [2024-11-17 23:25:20.650281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.907 [2024-11-17 23:25:20.650293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.907 [2024-11-17 23:25:20.664224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.907 [2024-11-17 23:25:20.664283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:56.907 [2024-11-17 23:25:20.664294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.907 [2024-11-17 23:25:20.664303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.907 [2024-11-17 23:25:20.675053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.907 [2024-11-17 23:25:20.675116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:56.907 [2024-11-17 23:25:20.675128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.907 [2024-11-17 23:25:20.675136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.907 [2024-11-17 23:25:20.675186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.907 [2024-11-17 23:25:20.675195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:56.907 [2024-11-17 23:25:20.675204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.907 [2024-11-17 23:25:20.675212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.907 [2024-11-17 23:25:20.675247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.907 [2024-11-17 23:25:20.675257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:56.907 [2024-11-17 23:25:20.675265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.907 [2024-11-17 23:25:20.675282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.907 [2024-11-17 23:25:20.675355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.907 [2024-11-17 23:25:20.675365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:56.907 
[2024-11-17 23:25:20.675375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.907 [2024-11-17 23:25:20.675383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.907 [2024-11-17 23:25:20.675416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.907 [2024-11-17 23:25:20.675425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:56.907 [2024-11-17 23:25:20.675433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.907 [2024-11-17 23:25:20.675445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.907 [2024-11-17 23:25:20.675487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.907 [2024-11-17 23:25:20.675496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:56.907 [2024-11-17 23:25:20.675505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.907 [2024-11-17 23:25:20.675517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.907 [2024-11-17 23:25:20.675566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.907 [2024-11-17 23:25:20.675577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:56.907 [2024-11-17 23:25:20.675586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.907 [2024-11-17 23:25:20.675597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.907 [2024-11-17 23:25:20.675735] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.884 ms, result 0 00:19:57.167 00:19:57.167 00:19:57.167 23:25:20 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:59.712 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:59.712 23:25:23 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:59.713 [2024-11-17 23:25:23.069613] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:19:59.713 [2024-11-17 23:25:23.069889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86989 ] 00:19:59.713 [2024-11-17 23:25:23.215961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.713 [2024-11-17 23:25:23.236397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.713 [2024-11-17 23:25:23.327625] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:59.713 [2024-11-17 23:25:23.327684] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:59.713 [2024-11-17 23:25:23.484356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.713 [2024-11-17 23:25:23.484398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:59.713 [2024-11-17 23:25:23.484411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:59.713 [2024-11-17 23:25:23.484418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.713 [2024-11-17 23:25:23.484471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.713 [2024-11-17 23:25:23.484481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:59.713 [2024-11-17 23:25:23.484490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:59.713 [2024-11-17 23:25:23.484497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.713 [2024-11-17 23:25:23.484516] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:59.713 [2024-11-17 23:25:23.484827] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:59.713 [2024-11-17 23:25:23.484859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.713 [2024-11-17 23:25:23.484867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:59.713 [2024-11-17 23:25:23.484876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:19:59.713 [2024-11-17 23:25:23.484901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.713 [2024-11-17 23:25:23.486110] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:59.713 [2024-11-17 23:25:23.488766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.713 [2024-11-17 23:25:23.488798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:59.713 [2024-11-17 23:25:23.488813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.659 ms 00:19:59.713 [2024-11-17 23:25:23.488821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.713 [2024-11-17 23:25:23.488876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.713 [2024-11-17 23:25:23.488904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:59.713 [2024-11-17 23:25:23.488913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:59.713 [2024-11-17 23:25:23.488920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.713 [2024-11-17 23:25:23.494189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:59.713 [2024-11-17 23:25:23.494218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:59.713 [2024-11-17 23:25:23.494226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.211 ms 00:19:59.713 [2024-11-17 23:25:23.494236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.713 [2024-11-17 23:25:23.494313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.713 [2024-11-17 23:25:23.494322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:59.713 [2024-11-17 23:25:23.494330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:59.713 [2024-11-17 23:25:23.494339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.713 [2024-11-17 23:25:23.494379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.713 [2024-11-17 23:25:23.494388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:59.713 [2024-11-17 23:25:23.494396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:59.713 [2024-11-17 23:25:23.494403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.713 [2024-11-17 23:25:23.494428] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:59.713 [2024-11-17 23:25:23.495821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.713 [2024-11-17 23:25:23.495845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:59.713 [2024-11-17 23:25:23.495854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.398 ms 00:19:59.713 [2024-11-17 23:25:23.495862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.713 [2024-11-17 23:25:23.495909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.713 [2024-11-17 23:25:23.495924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:59.713 [2024-11-17 23:25:23.495932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:59.713 [2024-11-17 23:25:23.495939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.713 [2024-11-17 23:25:23.495960] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:59.713 [2024-11-17 23:25:23.495982] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:59.713 [2024-11-17 23:25:23.496021] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:59.713 [2024-11-17 23:25:23.496038] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:59.713 [2024-11-17 23:25:23.496144] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:59.713 [2024-11-17 23:25:23.496154] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:59.713 [2024-11-17 23:25:23.496164] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:59.713 [2024-11-17 23:25:23.496176] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:59.713 [2024-11-17 23:25:23.496185] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:59.713 [2024-11-17 23:25:23.496192] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:59.713 [2024-11-17 23:25:23.496199] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:59.713 [2024-11-17 23:25:23.496206] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:59.713 [2024-11-17 23:25:23.496216] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:59.713 [2024-11-17 23:25:23.496224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.713 [2024-11-17 23:25:23.496232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:59.713 [2024-11-17 23:25:23.496242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:19:59.713 [2024-11-17 23:25:23.496252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.713 [2024-11-17 23:25:23.496335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.713 [2024-11-17 23:25:23.496345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:59.713 [2024-11-17 23:25:23.496352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:59.713 [2024-11-17 23:25:23.496362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.713 [2024-11-17 23:25:23.496460] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:59.713 [2024-11-17 23:25:23.496469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:59.713 [2024-11-17 23:25:23.496478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:59.713 [2024-11-17 23:25:23.496487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.713 [2024-11-17 23:25:23.496495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:59.713 [2024-11-17 23:25:23.496502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:59.713 [2024-11-17 23:25:23.496510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:59.713 [2024-11-17 23:25:23.496518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:59.713 [2024-11-17 23:25:23.496527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:59.713 [2024-11-17 23:25:23.496534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:59.713 [2024-11-17 23:25:23.496542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:59.713 [2024-11-17 23:25:23.496553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:59.713 [2024-11-17 23:25:23.496561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:59.713 [2024-11-17 23:25:23.496568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:59.713 [2024-11-17 23:25:23.496578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:59.713 [2024-11-17 23:25:23.496586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.713 [2024-11-17 23:25:23.496594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:59.713 [2024-11-17 23:25:23.496601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:59.713 [2024-11-17 23:25:23.496609] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.713 [2024-11-17 23:25:23.496617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:59.713 [2024-11-17 23:25:23.496624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:59.713 [2024-11-17 23:25:23.496631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.713 [2024-11-17 23:25:23.496638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:59.713 [2024-11-17 23:25:23.496646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:59.713 [2024-11-17 23:25:23.496653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.713 [2024-11-17 23:25:23.496661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:59.713 [2024-11-17 23:25:23.496668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:59.713 [2024-11-17 23:25:23.496679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.713 [2024-11-17 23:25:23.496687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:59.713 [2024-11-17 23:25:23.496694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:59.713 [2024-11-17 23:25:23.496701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.713 [2024-11-17 23:25:23.496709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:59.713 [2024-11-17 23:25:23.496716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:59.713 [2024-11-17 23:25:23.496723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:59.713 [2024-11-17 23:25:23.496731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:59.713 [2024-11-17 23:25:23.496738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:59.713 [2024-11-17 23:25:23.496746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:59.713 [2024-11-17 23:25:23.496753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:59.713 [2024-11-17 23:25:23.496760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:59.713 [2024-11-17 23:25:23.496767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.713 [2024-11-17 23:25:23.496775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:59.713 [2024-11-17 23:25:23.496782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:59.713 [2024-11-17 23:25:23.496789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.713 [2024-11-17 23:25:23.496798] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:59.713 [2024-11-17 23:25:23.496807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:59.713 [2024-11-17 23:25:23.496817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:59.713 [2024-11-17 23:25:23.496826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.713 [2024-11-17 23:25:23.496834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:59.713 [2024-11-17 23:25:23.496842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:59.713 [2024-11-17 23:25:23.496849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:59.713 
[2024-11-17 23:25:23.496856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:59.713 [2024-11-17 23:25:23.496862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:59.713 [2024-11-17 23:25:23.496868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:59.713 [2024-11-17 23:25:23.496876] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:59.713 [2024-11-17 23:25:23.496917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:59.713 [2024-11-17 23:25:23.496929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:59.713 [2024-11-17 23:25:23.496937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:59.713 [2024-11-17 23:25:23.496944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:59.713 [2024-11-17 23:25:23.496951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:59.713 [2024-11-17 23:25:23.496960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:59.713 [2024-11-17 23:25:23.496967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:59.713 [2024-11-17 23:25:23.496974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:59.713 [2024-11-17 23:25:23.496982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:59.713 [2024-11-17 23:25:23.496989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:59.713 [2024-11-17 23:25:23.496996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:59.713 [2024-11-17 23:25:23.497003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:59.713 [2024-11-17 23:25:23.497014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:59.713 [2024-11-17 23:25:23.497022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:59.713 [2024-11-17 23:25:23.497029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:59.713 [2024-11-17 23:25:23.497036] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:59.713 [2024-11-17 23:25:23.497047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:59.713 [2024-11-17 23:25:23.497055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:59.713 [2024-11-17 23:25:23.497062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:59.713 [2024-11-17 23:25:23.497069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:59.713 [2024-11-17 23:25:23.497077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:59.713 [2024-11-17 23:25:23.497086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.713 [2024-11-17 23:25:23.497093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:59.713 [2024-11-17 23:25:23.497100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:19:59.714 [2024-11-17 23:25:23.497108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.714 [2024-11-17 23:25:23.506605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.714 [2024-11-17 23:25:23.506639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:59.714 [2024-11-17 23:25:23.506649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.455 ms 00:19:59.714 [2024-11-17 23:25:23.506656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.714 [2024-11-17 23:25:23.506737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.714 [2024-11-17 23:25:23.506746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:59.714 [2024-11-17 23:25:23.506754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:59.714 [2024-11-17 23:25:23.506761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.714 [2024-11-17 23:25:23.529818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.714 [2024-11-17 23:25:23.529922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:59.714 [2024-11-17 23:25:23.529953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.003 ms 00:19:59.975 [2024-11-17 23:25:23.529973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.975 [2024-11-17 23:25:23.530064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.975 [2024-11-17 23:25:23.530100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:59.975 [2024-11-17 23:25:23.530123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:59.975 [2024-11-17 23:25:23.530142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.975 [2024-11-17 23:25:23.530743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.975 [2024-11-17 23:25:23.530804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:59.975 [2024-11-17 23:25:23.530831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:19:59.975 [2024-11-17 23:25:23.530854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.975 [2024-11-17 23:25:23.531197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.975 [2024-11-17 23:25:23.531235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:59.975 [2024-11-17 23:25:23.531257] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:19:59.975 [2024-11-17 23:25:23.531277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.975 [2024-11-17 23:25:23.537353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.975 [2024-11-17 23:25:23.537385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:59.975 [2024-11-17 23:25:23.537395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.032 ms 00:19:59.975 [2024-11-17 23:25:23.537402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.975 [2024-11-17 23:25:23.540143] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:59.975 [2024-11-17 23:25:23.540182] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:59.975 [2024-11-17 23:25:23.540192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.975 [2024-11-17 23:25:23.540201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:59.975 [2024-11-17 23:25:23.540209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.707 ms 00:19:59.975 [2024-11-17 23:25:23.540216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.975 [2024-11-17 23:25:23.554958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.975 [2024-11-17 23:25:23.554995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:59.975 [2024-11-17 23:25:23.555005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.705 ms 00:19:59.975 [2024-11-17 23:25:23.555013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.975 [2024-11-17 23:25:23.557180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.975 [2024-11-17 23:25:23.557209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:59.975 [2024-11-17 23:25:23.557218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.129 ms 00:19:59.975 [2024-11-17 23:25:23.557225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.976 [2024-11-17 23:25:23.559228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.976 [2024-11-17 23:25:23.559258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:59.976 [2024-11-17 23:25:23.559267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.970 ms 00:19:59.976 [2024-11-17 23:25:23.559274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.976 [2024-11-17 23:25:23.559601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.976 [2024-11-17 23:25:23.559619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:59.976 [2024-11-17 23:25:23.559628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:19:59.976 [2024-11-17 23:25:23.559635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.976 [2024-11-17 23:25:23.577520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.976 [2024-11-17 23:25:23.577569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:59.976 [2024-11-17 23:25:23.577579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
17.869 ms 00:19:59.976 [2024-11-17 23:25:23.577591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.976 [2024-11-17 23:25:23.585191] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:59.976 [2024-11-17 23:25:23.587588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.976 [2024-11-17 23:25:23.587615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:59.976 [2024-11-17 23:25:23.587635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.958 ms 00:19:59.976 [2024-11-17 23:25:23.587644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.976 [2024-11-17 23:25:23.587730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.976 [2024-11-17 23:25:23.587740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:59.976 [2024-11-17 23:25:23.587749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:59.976 [2024-11-17 23:25:23.587756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.976 [2024-11-17 23:25:23.587825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.976 [2024-11-17 23:25:23.587835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:59.976 [2024-11-17 23:25:23.587846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:59.976 [2024-11-17 23:25:23.587857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.976 [2024-11-17 23:25:23.587895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.976 [2024-11-17 23:25:23.587916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:59.976 [2024-11-17 23:25:23.587924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:59.976 [2024-11-17 23:25:23.587931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.976 [2024-11-17 23:25:23.587964] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:59.976 [2024-11-17 23:25:23.587976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.976 [2024-11-17 23:25:23.587983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:59.976 [2024-11-17 23:25:23.587991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:59.976 [2024-11-17 23:25:23.588000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.976 [2024-11-17 23:25:23.591865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.976 [2024-11-17 23:25:23.591924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:59.976 [2024-11-17 23:25:23.591942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.848 ms 00:19:59.976 [2024-11-17 23:25:23.591950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.976 [2024-11-17 23:25:23.592016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.976 [2024-11-17 23:25:23.592026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:59.976 [2024-11-17 23:25:23.592038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:59.976 [2024-11-17 23:25:23.592047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.976 
[2024-11-17 23:25:23.592988] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 108.217 ms, result 0 00:20:00.920  [spdk_dd progress meter elided: carriage-return "Copying: N/1024 [MB]" updates from 18/1024 MB (2024-11-17T23:25:25Z) through 1023/1024 MB (2024-11-17T23:26:17Z); final entry follows] [2024-11-17T23:26:17.957Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-17 23:26:17.928466] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.136 [2024-11-17 23:26:17.928547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:54.136 [2024-11-17 23:26:17.928564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:54.136 [2024-11-17 23:26:17.928573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.136 [2024-11-17 23:26:17.931637] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:54.136 [2024-11-17 23:26:17.933017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.136 [2024-11-17 23:26:17.933071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:54.136 [2024-11-17 23:26:17.933085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.318 ms 00:20:54.136 [2024-11-17 23:26:17.933103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.136 [2024-11-17 23:26:17.946996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.136 [2024-11-17 23:26:17.947051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:54.136 [2024-11-17 23:26:17.947067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.615 ms 00:20:54.136 [2024-11-17 23:26:17.947076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.398 [2024-11-17 23:26:17.970386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.398 [2024-11-17 23:26:17.970444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:54.398 [2024-11-17 23:26:17.970458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.279 ms 00:20:54.398 [2024-11-17 23:26:17.970468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.398 [2024-11-17 23:26:17.976676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.398 [2024-11-17 23:26:17.976735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:54.398 [2024-11-17 23:26:17.976748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.149 ms 00:20:54.398 [2024-11-17 23:26:17.976761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.398 [2024-11-17 23:26:17.979479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.398 [2024-11-17 23:26:17.979532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:54.398 [2024-11-17 23:26:17.979543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.676 ms 00:20:54.398 [2024-11-17 23:26:17.979552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.398 [2024-11-17 23:26:17.984267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.398 [2024-11-17 23:26:17.984322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:54.398 [2024-11-17 23:26:17.984336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.670 ms 00:20:54.398 [2024-11-17 23:26:17.984346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.661 [2024-11-17 23:26:18.266150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.661 [2024-11-17 23:26:18.266208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:54.661 [2024-11-17 23:26:18.266222] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 281.736 ms 00:20:54.661 [2024-11-17 23:26:18.266232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.661 [2024-11-17 23:26:18.270015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.661 [2024-11-17 23:26:18.270067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:54.661 [2024-11-17 23:26:18.270079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.764 ms 00:20:54.661 [2024-11-17 23:26:18.270087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.661 [2024-11-17 23:26:18.273208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.661 [2024-11-17 23:26:18.273259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:54.661 [2024-11-17 23:26:18.273270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.072 ms 00:20:54.661 [2024-11-17 23:26:18.273278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.661 [2024-11-17 23:26:18.275657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.661 [2024-11-17 23:26:18.275708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:54.661 [2024-11-17 23:26:18.275719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.333 ms 00:20:54.661 [2024-11-17 23:26:18.275727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.661 [2024-11-17 23:26:18.278221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.661 [2024-11-17 23:26:18.278274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:54.661 [2024-11-17 23:26:18.278287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.421 ms 00:20:54.661 [2024-11-17 23:26:18.278295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.661 [2024-11-17 23:26:18.278337] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:54.661 [2024-11-17 23:26:18.278352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 98816 / 261120 wr_cnt: 1 state: open 00:20:54.661 [2024-11-17 23:26:18.278363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:54.661 [2024-11-17 23:26:18.278372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:54.661 [2024-11-17 23:26:18.278380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:54.661 [2024-11-17 23:26:18.278389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:54.661 [2024-11-17 23:26:18.278397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:54.661 [2024-11-17 23:26:18.278406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:54.661 [2024-11-17 23:26:18.278415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:54.661 [2024-11-17 23:26:18.278426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:54.661 [2024-11-17 23:26:18.278434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:54.661 [2024-11-17 23:26:18.278442] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:54.662 [Bands 12 through 84 elided (2024-11-17 23:26:18.278451 through 23:26:18.279092): each reported identically as 0 / 261120 wr_cnt: 0 state: free] [2024-11-17 23:26:18.279104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:20:54.662 [2024-11-17 23:26:18.279114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:54.662 [2024-11-17 23:26:18.279122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:54.662 [2024-11-17 23:26:18.279131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:54.662 [2024-11-17 23:26:18.279139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:54.662 [2024-11-17 23:26:18.279147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:54.662 [2024-11-17 23:26:18.279157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:54.662 [2024-11-17 23:26:18.279164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:54.662 [2024-11-17 23:26:18.279172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:54.662 [2024-11-17 23:26:18.279179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:54.662 [2024-11-17 23:26:18.279187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:54.662 [2024-11-17 23:26:18.279198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:54.662 [2024-11-17 23:26:18.279207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:54.663 [2024-11-17 23:26:18.279214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:54.663 [2024-11-17 23:26:18.279222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:54.663 [2024-11-17 23:26:18.279235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:54.663 [2024-11-17 23:26:18.279252] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:54.663 [2024-11-17 23:26:18.279263] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e1a16e31-7428-4c79-b749-82ad6e64cef8 00:20:54.663 [2024-11-17 23:26:18.279272] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 98816 00:20:54.663 [2024-11-17 23:26:18.279281] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 99776 00:20:54.663 [2024-11-17 23:26:18.279294] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 98816 00:20:54.663 [2024-11-17 23:26:18.279310] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0097 00:20:54.663 [2024-11-17 23:26:18.279323] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:54.663 [2024-11-17 23:26:18.279332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:54.663 [2024-11-17 23:26:18.279340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:54.663 [2024-11-17 23:26:18.279348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:54.663 [2024-11-17 23:26:18.279356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:54.663 [2024-11-17 23:26:18.279363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:54.663 [2024-11-17 23:26:18.279371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:54.663 [2024-11-17 23:26:18.279380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:20:54.663 [2024-11-17 23:26:18.279388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.663 [2024-11-17 23:26:18.281871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.663 [2024-11-17 23:26:18.281941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:54.663 [2024-11-17 23:26:18.281957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.462 ms 00:20:54.663 [2024-11-17 23:26:18.281966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.663 [2024-11-17 23:26:18.282095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.663 [2024-11-17 23:26:18.282105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:54.663 [2024-11-17 23:26:18.282114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:54.663 [2024-11-17 23:26:18.282122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.663 [2024-11-17 23:26:18.289966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.663 [2024-11-17 23:26:18.290017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:54.663 [2024-11-17 23:26:18.290029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.663 [2024-11-17 23:26:18.290038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.663 [2024-11-17 23:26:18.290099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.663 [2024-11-17 23:26:18.290108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:54.663 [2024-11-17 23:26:18.290117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.663 [2024-11-17 23:26:18.290125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.663 [2024-11-17 23:26:18.290232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.663 [2024-11-17 23:26:18.290246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:54.663 [2024-11-17 23:26:18.290255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.663 [2024-11-17 23:26:18.290267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.663 [2024-11-17 23:26:18.290288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.663 [2024-11-17 23:26:18.290301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:54.663 [2024-11-17 23:26:18.290309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.663 [2024-11-17 23:26:18.290317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.663 [2024-11-17 23:26:18.304099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.663 [2024-11-17 23:26:18.304148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:54.663 [2024-11-17 23:26:18.304160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.663 [2024-11-17 23:26:18.304168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.663 [2024-11-17 
23:26:18.314307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.663 [2024-11-17 23:26:18.314355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:54.663 [2024-11-17 23:26:18.314366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.663 [2024-11-17 23:26:18.314375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.663 [2024-11-17 23:26:18.314424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.663 [2024-11-17 23:26:18.314441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:54.663 [2024-11-17 23:26:18.314450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.663 [2024-11-17 23:26:18.314458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.663 [2024-11-17 23:26:18.314493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.663 [2024-11-17 23:26:18.314501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:54.663 [2024-11-17 23:26:18.314510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.663 [2024-11-17 23:26:18.314518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.663 [2024-11-17 23:26:18.314586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.663 [2024-11-17 23:26:18.314599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:54.663 [2024-11-17 23:26:18.314617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.663 [2024-11-17 23:26:18.314625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.663 [2024-11-17 23:26:18.314659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.663 [2024-11-17 23:26:18.314668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:54.663 [2024-11-17 23:26:18.314676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.663 [2024-11-17 23:26:18.314683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.663 [2024-11-17 23:26:18.314724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.663 [2024-11-17 23:26:18.314737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:54.663 [2024-11-17 23:26:18.314749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.663 [2024-11-17 23:26:18.314756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.663 [2024-11-17 23:26:18.314801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.663 [2024-11-17 23:26:18.314812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:54.663 [2024-11-17 23:26:18.314821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.663 [2024-11-17 23:26:18.314830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.663 [2024-11-17 23:26:18.314982] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 389.369 ms, result 0 00:20:55.235 00:20:55.235 00:20:55.500 23:26:19 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:55.500 [2024-11-17 23:26:19.141726] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:20:55.500 [2024-11-17 23:26:19.141915] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87565 ] 00:20:55.500 [2024-11-17 23:26:19.290539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.762 [2024-11-17 23:26:19.319452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.762 [2024-11-17 23:26:19.427931] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:55.762 [2024-11-17 23:26:19.428001] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:56.025 [2024-11-17 23:26:19.589471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.025 [2024-11-17 23:26:19.589537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:56.025 [2024-11-17 23:26:19.589553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:56.025 [2024-11-17 23:26:19.589563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.025 [2024-11-17 23:26:19.589621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.025 [2024-11-17 23:26:19.589632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:56.025 [2024-11-17 23:26:19.589642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:56.025 [2024-11-17 23:26:19.589650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.025 [2024-11-17 23:26:19.589674] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:56.025 [2024-11-17 23:26:19.590033] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:56.025 [2024-11-17 23:26:19.590068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.025 [2024-11-17 23:26:19.590078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:56.025 [2024-11-17 23:26:19.590089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:20:56.025 [2024-11-17 23:26:19.590104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.025 [2024-11-17 23:26:19.591840] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:56.025 [2024-11-17 23:26:19.595830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.025 [2024-11-17 23:26:19.595901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:56.025 [2024-11-17 23:26:19.595914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.992 ms 00:20:56.025 [2024-11-17 23:26:19.595923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.025 [2024-11-17 23:26:19.596014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.025 [2024-11-17 23:26:19.596027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:56.025 [2024-11-17 23:26:19.596037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 
00:20:56.025 [2024-11-17 23:26:19.596073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.025 [2024-11-17 23:26:19.604512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.025 [2024-11-17 23:26:19.604559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:56.025 [2024-11-17 23:26:19.604570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.390 ms 00:20:56.025 [2024-11-17 23:26:19.604585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.025 [2024-11-17 23:26:19.604683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.025 [2024-11-17 23:26:19.604693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:56.025 [2024-11-17 23:26:19.604703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:56.025 [2024-11-17 23:26:19.604723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.025 [2024-11-17 23:26:19.604789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.025 [2024-11-17 23:26:19.604802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:56.025 [2024-11-17 23:26:19.604816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:56.025 [2024-11-17 23:26:19.604825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.025 [2024-11-17 23:26:19.604850] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:56.025 [2024-11-17 23:26:19.606794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.025 [2024-11-17 23:26:19.606835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:56.025 [2024-11-17 23:26:19.606846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.950 ms 00:20:56.025 [2024-11-17 23:26:19.606855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.025 [2024-11-17 23:26:19.606912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.025 [2024-11-17 23:26:19.606926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:56.025 [2024-11-17 23:26:19.606935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:56.025 [2024-11-17 23:26:19.606943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.025 [2024-11-17 23:26:19.606970] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:56.025 [2024-11-17 23:26:19.606992] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:56.025 [2024-11-17 23:26:19.607036] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:56.025 [2024-11-17 23:26:19.607056] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:56.025 [2024-11-17 23:26:19.607162] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:56.025 [2024-11-17 23:26:19.607176] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:56.025 [2024-11-17 23:26:19.607188] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 
0x190 bytes 00:20:56.025 [2024-11-17 23:26:19.607201] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:56.025 [2024-11-17 23:26:19.607216] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:56.025 [2024-11-17 23:26:19.607227] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:56.025 [2024-11-17 23:26:19.607236] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:56.025 [2024-11-17 23:26:19.607244] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:56.025 [2024-11-17 23:26:19.607255] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:56.025 [2024-11-17 23:26:19.607265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.025 [2024-11-17 23:26:19.607273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:56.025 [2024-11-17 23:26:19.607284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:20:56.025 [2024-11-17 23:26:19.607299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.025 [2024-11-17 23:26:19.607381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.025 [2024-11-17 23:26:19.607393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:56.025 [2024-11-17 23:26:19.607402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:56.025 [2024-11-17 23:26:19.607409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.025 [2024-11-17 23:26:19.607507] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:56.025 [2024-11-17 23:26:19.607520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:56.025 [2024-11-17 23:26:19.607531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:56.025 [2024-11-17 23:26:19.607540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:56.025 [2024-11-17 23:26:19.607549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:56.025 [2024-11-17 23:26:19.607557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:56.025 [2024-11-17 23:26:19.607567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:56.025 [2024-11-17 23:26:19.607577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:56.025 [2024-11-17 23:26:19.607585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:56.025 [2024-11-17 23:26:19.607594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:56.025 [2024-11-17 23:26:19.607606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:56.025 [2024-11-17 23:26:19.607615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:56.025 [2024-11-17 23:26:19.607625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:56.025 [2024-11-17 23:26:19.607633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:56.025 [2024-11-17 23:26:19.607641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:56.025 [2024-11-17 23:26:19.607649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:56.025 [2024-11-17 23:26:19.607658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region nvc_md_mirror 00:20:56.026 [2024-11-17 23:26:19.607667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:56.026 [2024-11-17 23:26:19.607677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:56.026 [2024-11-17 23:26:19.607686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:56.026 [2024-11-17 23:26:19.607693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:56.026 [2024-11-17 23:26:19.607704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:56.026 [2024-11-17 23:26:19.607712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:56.026 [2024-11-17 23:26:19.607720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:56.026 [2024-11-17 23:26:19.607728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:56.026 [2024-11-17 23:26:19.607737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:56.026 [2024-11-17 23:26:19.607751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:56.026 [2024-11-17 23:26:19.607759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:56.026 [2024-11-17 23:26:19.607767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:56.026 [2024-11-17 23:26:19.607774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:56.026 [2024-11-17 23:26:19.607783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:56.026 [2024-11-17 23:26:19.607791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:56.026 [2024-11-17 23:26:19.607798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:56.026 [2024-11-17 23:26:19.607806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:56.026 [2024-11-17 23:26:19.607814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:56.026 [2024-11-17 23:26:19.607820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:56.026 [2024-11-17 23:26:19.607826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:56.026 [2024-11-17 23:26:19.607834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:56.026 [2024-11-17 23:26:19.607840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:56.026 [2024-11-17 23:26:19.607847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:56.026 [2024-11-17 23:26:19.607853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:56.026 [2024-11-17 23:26:19.607860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:56.026 [2024-11-17 23:26:19.607869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:56.026 [2024-11-17 23:26:19.607877] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:56.026 [2024-11-17 23:26:19.607901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:56.026 [2024-11-17 23:26:19.607911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:56.026 [2024-11-17 23:26:19.607925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:56.026 [2024-11-17 23:26:19.607936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:56.026 [2024-11-17 23:26:19.607944] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:56.026 [2024-11-17 23:26:19.607952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:56.026 [2024-11-17 23:26:19.607959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:56.026 [2024-11-17 23:26:19.607967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:56.026 [2024-11-17 23:26:19.607973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:56.026 [2024-11-17 23:26:19.607981] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:56.026 [2024-11-17 23:26:19.607991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:56.026 [2024-11-17 23:26:19.608005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:56.026 [2024-11-17 23:26:19.608014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:56.026 [2024-11-17 23:26:19.608022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:56.026 [2024-11-17 23:26:19.608032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:56.026 [2024-11-17 23:26:19.608072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:56.026 [2024-11-17 23:26:19.608082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:56.026 [2024-11-17 23:26:19.608090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:56.026 [2024-11-17 23:26:19.608098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:56.026 [2024-11-17 23:26:19.608105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:56.026 [2024-11-17 23:26:19.608112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:56.026 [2024-11-17 23:26:19.608119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:56.026 [2024-11-17 23:26:19.608133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:56.026 [2024-11-17 23:26:19.608140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:56.026 [2024-11-17 23:26:19.608148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:56.026 [2024-11-17 23:26:19.608155] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:56.026 [2024-11-17 23:26:19.608163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:56.026 [2024-11-17 23:26:19.608171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:56.026 [2024-11-17 23:26:19.608180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:56.026 [2024-11-17 23:26:19.608188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:56.026 [2024-11-17 23:26:19.608199] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:56.026 [2024-11-17 23:26:19.608207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.026 [2024-11-17 23:26:19.608215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:56.026 [2024-11-17 23:26:19.608229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:20:56.026 [2024-11-17 23:26:19.608241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.026 [2024-11-17 23:26:19.621736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.026 [2024-11-17 23:26:19.621784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:56.026 [2024-11-17 23:26:19.621796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.445 ms 00:20:56.026 [2024-11-17 23:26:19.621805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.026 [2024-11-17 23:26:19.621927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.026 [2024-11-17 23:26:19.621936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:56.026 [2024-11-17 23:26:19.621945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:20:56.026 [2024-11-17 23:26:19.621953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.026 [2024-11-17 23:26:19.642140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.026 [2024-11-17 23:26:19.642198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:56.026 [2024-11-17 23:26:19.642211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.127 ms 00:20:56.026 [2024-11-17 23:26:19.642220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.026 [2024-11-17 23:26:19.642268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.026 [2024-11-17 23:26:19.642279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:56.026 [2024-11-17 23:26:19.642288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:56.026 [2024-11-17 23:26:19.642299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.026 [2024-11-17 23:26:19.642785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.026 [2024-11-17 23:26:19.642829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:56.026 [2024-11-17 23:26:19.642841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:20:56.026 [2024-11-17 23:26:19.642850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.026 [2024-11-17 23:26:19.643019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:56.026 [2024-11-17 23:26:19.643033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:56.026 [2024-11-17 23:26:19.643043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:20:56.026 [2024-11-17 23:26:19.643056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.026 [2024-11-17 23:26:19.650794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.026 [2024-11-17 23:26:19.650844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:56.026 [2024-11-17 23:26:19.650857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.710 ms 00:20:56.026 [2024-11-17 23:26:19.650866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.026 [2024-11-17 23:26:19.654795] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:56.026 [2024-11-17 23:26:19.654848] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:56.026 [2024-11-17 23:26:19.654866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.026 [2024-11-17 23:26:19.654877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:56.026 [2024-11-17 23:26:19.654904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.872 ms 00:20:56.026 [2024-11-17 23:26:19.654913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.026 [2024-11-17 23:26:19.671044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.026 [2024-11-17 23:26:19.671098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:56.026 [2024-11-17 23:26:19.671111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.067 ms 00:20:56.027 [2024-11-17 23:26:19.671120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.027 [2024-11-17 23:26:19.674284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.027 [2024-11-17 23:26:19.674333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:56.027 [2024-11-17 23:26:19.674344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.110 ms 00:20:56.027 [2024-11-17 23:26:19.674351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.027 [2024-11-17 23:26:19.677101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.027 [2024-11-17 23:26:19.677146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:56.027 [2024-11-17 23:26:19.677156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.702 ms 00:20:56.027 [2024-11-17 23:26:19.677164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.027 [2024-11-17 23:26:19.677509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.027 [2024-11-17 23:26:19.677524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:56.027 [2024-11-17 23:26:19.677534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:20:56.027 [2024-11-17 23:26:19.677542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.027 [2024-11-17 23:26:19.701442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.027 [2024-11-17 23:26:19.701495] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:56.027 [2024-11-17 23:26:19.701507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.875 ms 00:20:56.027 [2024-11-17 23:26:19.701516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.027 [2024-11-17 23:26:19.709475] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:56.027 [2024-11-17 23:26:19.712380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.027 [2024-11-17 23:26:19.712420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:56.027 [2024-11-17 23:26:19.712432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.816 ms 00:20:56.027 [2024-11-17 23:26:19.712448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.027 [2024-11-17 23:26:19.712519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.027 [2024-11-17 23:26:19.712530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:56.027 [2024-11-17 23:26:19.712540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:56.027 [2024-11-17 23:26:19.712553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.027 [2024-11-17 23:26:19.714229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.027 [2024-11-17 23:26:19.714277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:56.027 [2024-11-17 23:26:19.714294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.639 ms 00:20:56.027 [2024-11-17 23:26:19.714302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.027 [2024-11-17 23:26:19.714338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.027 [2024-11-17 23:26:19.714347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:56.027 [2024-11-17 23:26:19.714356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:56.027 [2024-11-17 23:26:19.714364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.027 [2024-11-17 23:26:19.714402] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:56.027 [2024-11-17 23:26:19.714417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.027 [2024-11-17 23:26:19.714426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:56.027 [2024-11-17 23:26:19.714437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:56.027 [2024-11-17 23:26:19.714446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.027 [2024-11-17 23:26:19.719670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.027 [2024-11-17 23:26:19.719714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:56.027 [2024-11-17 23:26:19.719731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.200 ms 00:20:56.027 [2024-11-17 23:26:19.719743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.027 [2024-11-17 23:26:19.719826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.027 [2024-11-17 23:26:19.719841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:56.027 [2024-11-17 23:26:19.719850] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:56.027 [2024-11-17 23:26:19.719861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.027 [2024-11-17 23:26:19.720977] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 131.042 ms, result 0 00:20:57.416  [2024-11-17T23:26:22.181Z] Copying: 8568/1048576 [kB] (8568 kBps) [2024-11-17T23:26:23.123Z] Copying: 21/1024 [MB] (13 MBps) [2024-11-17T23:26:24.067Z] Copying: 33/1024 [MB] (11 MBps) [2024-11-17T23:26:25.011Z] Copying: 52/1024 [MB] (19 MBps) [2024-11-17T23:26:25.955Z] Copying: 66/1024 [MB] (14 MBps) [2024-11-17T23:26:27.343Z] Copying: 77/1024 [MB] (10 MBps) [2024-11-17T23:26:27.920Z] Copying: 92/1024 [MB] (15 MBps) [2024-11-17T23:26:29.315Z] Copying: 111/1024 [MB] (18 MBps) [2024-11-17T23:26:30.256Z] Copying: 124/1024 [MB] (13 MBps) [2024-11-17T23:26:31.200Z] Copying: 135/1024 [MB] (10 MBps) [2024-11-17T23:26:32.145Z] Copying: 150/1024 [MB] (14 MBps) [2024-11-17T23:26:33.091Z] Copying: 160/1024 [MB] (10 MBps) [2024-11-17T23:26:34.085Z] Copying: 170/1024 [MB] (10 MBps) [2024-11-17T23:26:35.029Z] Copying: 182/1024 [MB] (11 MBps) [2024-11-17T23:26:35.973Z] Copying: 202/1024 [MB] (20 MBps) [2024-11-17T23:26:36.919Z] Copying: 222/1024 [MB] (19 MBps) [2024-11-17T23:26:38.313Z] Copying: 240/1024 [MB] (18 MBps) [2024-11-17T23:26:39.256Z] Copying: 254/1024 [MB] (13 MBps) [2024-11-17T23:26:40.198Z] Copying: 264/1024 [MB] (10 MBps) [2024-11-17T23:26:41.144Z] Copying: 275/1024 [MB] (10 MBps) [2024-11-17T23:26:42.088Z] Copying: 286/1024 [MB] (11 MBps) [2024-11-17T23:26:43.032Z] Copying: 299/1024 [MB] (12 MBps) [2024-11-17T23:26:43.978Z] Copying: 317/1024 [MB] (18 MBps) [2024-11-17T23:26:44.924Z] Copying: 328/1024 [MB] (10 MBps) [2024-11-17T23:26:46.330Z] Copying: 345/1024 [MB] (16 MBps) [2024-11-17T23:26:47.275Z] Copying: 364/1024 [MB] (19 MBps) [2024-11-17T23:26:48.218Z] Copying: 380/1024 [MB] (15 MBps) [2024-11-17T23:26:49.163Z] Copying: 406/1024 [MB] (26 MBps) [2024-11-17T23:26:50.106Z] Copying: 425/1024 [MB] (19 MBps) [2024-11-17T23:26:51.050Z] Copying: 444/1024 [MB] (18 MBps) [2024-11-17T23:26:51.994Z] Copying: 465/1024 [MB] (21 MBps) [2024-11-17T23:26:52.937Z] Copying: 486/1024 [MB] (20 MBps) [2024-11-17T23:26:54.324Z] Copying: 500/1024 [MB] (14 MBps) [2024-11-17T23:26:55.268Z] Copying: 517/1024 [MB] (16 MBps) [2024-11-17T23:26:56.213Z] Copying: 533/1024 [MB] (15 MBps) [2024-11-17T23:26:57.158Z] Copying: 547/1024 [MB] (14 MBps) [2024-11-17T23:26:58.102Z] Copying: 568/1024 [MB] (20 MBps) [2024-11-17T23:26:59.046Z] Copying: 579/1024 [MB] (11 MBps) [2024-11-17T23:26:59.989Z] Copying: 590/1024 [MB] (10 MBps) [2024-11-17T23:27:00.930Z] Copying: 601/1024 [MB] (10 MBps) [2024-11-17T23:27:02.316Z] Copying: 611/1024 [MB] (10 MBps) [2024-11-17T23:27:02.943Z] Copying: 635/1024 [MB] (24 MBps) [2024-11-17T23:27:04.325Z] Copying: 647/1024 [MB] (11 MBps) [2024-11-17T23:27:05.268Z] Copying: 662/1024 [MB] (15 MBps) [2024-11-17T23:27:06.212Z] Copying: 684/1024 [MB] (22 MBps) [2024-11-17T23:27:07.155Z] Copying: 701/1024 [MB] (16 MBps) [2024-11-17T23:27:08.100Z] Copying: 717/1024 [MB] (16 MBps) [2024-11-17T23:27:09.043Z] Copying: 735/1024 [MB] (17 MBps) [2024-11-17T23:27:09.985Z] Copying: 751/1024 [MB] (16 MBps) [2024-11-17T23:27:10.931Z] Copying: 770/1024 [MB] (18 MBps) [2024-11-17T23:27:12.317Z] Copying: 787/1024 [MB] (16 MBps) [2024-11-17T23:27:13.262Z] Copying: 807/1024 [MB] (20 MBps) [2024-11-17T23:27:14.207Z] Copying: 828/1024 [MB] (20 MBps) 
[2024-11-17T23:27:15.148Z] Copying: 848/1024 [MB] (20 MBps) [2024-11-17T23:27:16.092Z] Copying: 864/1024 [MB] (15 MBps) [2024-11-17T23:27:17.036Z] Copying: 884/1024 [MB] (20 MBps) [2024-11-17T23:27:17.978Z] Copying: 903/1024 [MB] (18 MBps) [2024-11-17T23:27:18.925Z] Copying: 915/1024 [MB] (12 MBps) [2024-11-17T23:27:20.314Z] Copying: 928/1024 [MB] (12 MBps) [2024-11-17T23:27:21.269Z] Copying: 944/1024 [MB] (15 MBps) [2024-11-17T23:27:22.212Z] Copying: 956/1024 [MB] (12 MBps) [2024-11-17T23:27:23.154Z] Copying: 983/1024 [MB] (26 MBps) [2024-11-17T23:27:24.102Z] Copying: 1008/1024 [MB] (24 MBps) [2024-11-17T23:27:24.366Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-17 23:27:24.164923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.545 [2024-11-17 23:27:24.165012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:00.545 [2024-11-17 23:27:24.165028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:00.545 [2024-11-17 23:27:24.165038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.545 [2024-11-17 23:27:24.165065] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:00.545 [2024-11-17 23:27:24.165832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.545 [2024-11-17 23:27:24.165857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:00.545 [2024-11-17 23:27:24.165867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:22:00.545 [2024-11-17 23:27:24.165903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.545 [2024-11-17 23:27:24.166874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.545 [2024-11-17 23:27:24.166912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:00.545 [2024-11-17 23:27:24.166923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:22:00.545 [2024-11-17 23:27:24.166932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.545 [2024-11-17 23:27:24.173809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.545 [2024-11-17 23:27:24.173864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:00.545 [2024-11-17 23:27:24.173876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.858 ms 00:22:00.545 [2024-11-17 23:27:24.173906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.545 [2024-11-17 23:27:24.180188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.545 [2024-11-17 23:27:24.180268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:00.545 [2024-11-17 23:27:24.180280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.231 ms 00:22:00.545 [2024-11-17 23:27:24.180289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.545 [2024-11-17 23:27:24.183753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.545 [2024-11-17 23:27:24.183804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:00.545 [2024-11-17 23:27:24.183815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.392 ms 00:22:00.545 [2024-11-17 23:27:24.183823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.545 [2024-11-17 23:27:24.188519] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.545 [2024-11-17 23:27:24.188571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:00.545 [2024-11-17 23:27:24.188583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.653 ms 00:22:00.545 [2024-11-17 23:27:24.188599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.808 [2024-11-17 23:27:24.370159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.808 [2024-11-17 23:27:24.370229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:00.808 [2024-11-17 23:27:24.370243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 181.513 ms 00:22:00.808 [2024-11-17 23:27:24.370262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.808 [2024-11-17 23:27:24.372707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.808 [2024-11-17 23:27:24.372756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:00.808 [2024-11-17 23:27:24.372766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.427 ms 00:22:00.808 [2024-11-17 23:27:24.372775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.808 [2024-11-17 23:27:24.374914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.808 [2024-11-17 23:27:24.374955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:00.808 [2024-11-17 23:27:24.374966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.097 ms 00:22:00.808 [2024-11-17 23:27:24.374974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.808 [2024-11-17 23:27:24.376534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.808 [2024-11-17 23:27:24.376581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:00.808 [2024-11-17 23:27:24.376591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.520 ms 00:22:00.808 [2024-11-17 23:27:24.376599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.808 [2024-11-17 23:27:24.378082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.808 [2024-11-17 23:27:24.378123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:00.808 [2024-11-17 23:27:24.378133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.417 ms 00:22:00.808 [2024-11-17 23:27:24.378140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.808 [2024-11-17 23:27:24.378177] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:00.808 [2024-11-17 23:27:24.378193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131840 / 261120 wr_cnt: 1 state: open 00:22:00.808 [2024-11-17 23:27:24.378205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378238] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 
23:27:24.378430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:00.808 [2024-11-17 23:27:24.378590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:22:00.809 [2024-11-17 23:27:24.378636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.378999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.379006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:00.809 [2024-11-17 23:27:24.379023] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:00.809 [2024-11-17 23:27:24.379033] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e1a16e31-7428-4c79-b749-82ad6e64cef8 00:22:00.809 [2024-11-17 23:27:24.379041] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131840 00:22:00.809 [2024-11-17 23:27:24.379049] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 33984 00:22:00.809 [2024-11-17 23:27:24.379069] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 33024 00:22:00.809 [2024-11-17 23:27:24.379078] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0291 00:22:00.809 [2024-11-17 23:27:24.379086] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:00.809 [2024-11-17 23:27:24.379095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:00.809 [2024-11-17 23:27:24.379103] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:00.809 [2024-11-17 23:27:24.379110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:00.809 [2024-11-17 23:27:24.379116] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:00.809 [2024-11-17 23:27:24.379124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.809 [2024-11-17 23:27:24.379132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:00.809 [2024-11-17 23:27:24.379141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:22:00.809 [2024-11-17 23:27:24.379149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.809 [2024-11-17 23:27:24.381420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.809 [2024-11-17 23:27:24.381456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:00.809 [2024-11-17 23:27:24.381472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.253 ms 00:22:00.810 [2024-11-17 23:27:24.381480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.810 [2024-11-17 23:27:24.381605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.810 [2024-11-17 23:27:24.381615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:00.810 [2024-11-17 23:27:24.381624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:22:00.810 [2024-11-17 23:27:24.381632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.810 [2024-11-17 23:27:24.389006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.810 [2024-11-17 23:27:24.389049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:00.810 [2024-11-17 23:27:24.389059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.810 [2024-11-17 23:27:24.389068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.810 [2024-11-17 23:27:24.389127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.810 [2024-11-17 23:27:24.389136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:00.810 [2024-11-17 23:27:24.389144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.810 [2024-11-17 23:27:24.389152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.810 [2024-11-17 23:27:24.389200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.810 [2024-11-17 23:27:24.389211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:00.810 [2024-11-17 23:27:24.389220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.810 [2024-11-17 23:27:24.389228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.810 [2024-11-17 23:27:24.389243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.810 [2024-11-17 23:27:24.389251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:22:00.810 [2024-11-17 23:27:24.389259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.810 [2024-11-17 23:27:24.389266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.810 [2024-11-17 23:27:24.402831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.810 [2024-11-17 23:27:24.402902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:00.810 [2024-11-17 23:27:24.402915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.810 [2024-11-17 23:27:24.402923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.810 [2024-11-17 23:27:24.413933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.810 [2024-11-17 23:27:24.413982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:00.810 [2024-11-17 23:27:24.413993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.810 [2024-11-17 23:27:24.414002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.810 [2024-11-17 23:27:24.414065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.810 [2024-11-17 23:27:24.414075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:00.810 [2024-11-17 23:27:24.414084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.810 [2024-11-17 23:27:24.414093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.810 [2024-11-17 23:27:24.414129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.810 [2024-11-17 23:27:24.414140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:00.810 [2024-11-17 23:27:24.414154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.810 [2024-11-17 23:27:24.414162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.810 [2024-11-17 23:27:24.414235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.810 [2024-11-17 23:27:24.414252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:00.810 [2024-11-17 23:27:24.414260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.810 [2024-11-17 23:27:24.414268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.810 [2024-11-17 23:27:24.414304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.810 [2024-11-17 23:27:24.414313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:00.810 [2024-11-17 23:27:24.414321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.810 [2024-11-17 23:27:24.414330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.810 [2024-11-17 23:27:24.414371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.810 [2024-11-17 23:27:24.414388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:00.810 [2024-11-17 23:27:24.414396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.810 [2024-11-17 23:27:24.414404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.810 [2024-11-17 23:27:24.414450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.810 [2024-11-17 23:27:24.414460] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:00.810 [2024-11-17 23:27:24.414473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.810 [2024-11-17 23:27:24.414482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.810 [2024-11-17 23:27:24.414615] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 249.676 ms, result 0 00:22:00.810 00:22:00.810 00:22:01.071 23:27:24 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:03.619 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:03.619 23:27:26 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:03.619 23:27:26 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:03.619 23:27:26 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:03.619 23:27:26 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:03.619 23:27:26 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:03.619 23:27:26 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 85558 00:22:03.619 23:27:26 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 85558 ']' 00:22:03.619 23:27:26 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 85558 00:22:03.619 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85558) - No such process 00:22:03.619 Process with pid 85558 is not found 00:22:03.619 23:27:26 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 85558 is not found' 00:22:03.619 23:27:26 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:03.619 Remove shared memory files 00:22:03.619 23:27:26 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:03.619 23:27:26 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:03.619 23:27:26 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:03.619 23:27:27 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:03.619 23:27:27 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:03.619 23:27:27 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:03.619 ************************************ 00:22:03.619 END TEST ftl_restore 00:22:03.619 ************************************ 00:22:03.619 00:22:03.619 real 4m19.424s 00:22:03.619 user 4m7.001s 00:22:03.619 sys 0m12.380s 00:22:03.619 23:27:27 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:03.619 23:27:27 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:03.619 23:27:27 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:03.619 23:27:27 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:03.619 23:27:27 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:03.619 23:27:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:03.619 ************************************ 00:22:03.619 START TEST ftl_dirty_shutdown 00:22:03.619 ************************************ 00:22:03.619 23:27:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:03.620 * Looking for test storage... 
00:22:03.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:03.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.620 --rc genhtml_branch_coverage=1 00:22:03.620 --rc genhtml_function_coverage=1 00:22:03.620 --rc genhtml_legend=1 00:22:03.620 --rc geninfo_all_blocks=1 00:22:03.620 --rc geninfo_unexecuted_blocks=1 00:22:03.620 00:22:03.620 ' 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:03.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.620 --rc genhtml_branch_coverage=1 00:22:03.620 --rc genhtml_function_coverage=1 00:22:03.620 --rc genhtml_legend=1 00:22:03.620 --rc geninfo_all_blocks=1 00:22:03.620 --rc geninfo_unexecuted_blocks=1 00:22:03.620 00:22:03.620 ' 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:03.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.620 --rc genhtml_branch_coverage=1 00:22:03.620 --rc genhtml_function_coverage=1 00:22:03.620 --rc genhtml_legend=1 00:22:03.620 --rc geninfo_all_blocks=1 00:22:03.620 --rc geninfo_unexecuted_blocks=1 00:22:03.620 00:22:03.620 ' 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:03.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.620 --rc genhtml_branch_coverage=1 00:22:03.620 --rc genhtml_function_coverage=1 00:22:03.620 --rc genhtml_legend=1 00:22:03.620 --rc geninfo_all_blocks=1 00:22:03.620 --rc geninfo_unexecuted_blocks=1 00:22:03.620 00:22:03.620 ' 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:03.620 23:27:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:03.621 23:27:27 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:03.621 23:27:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:03.621 23:27:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:03.621 23:27:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:03.621 23:27:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:03.621 23:27:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:03.621 23:27:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=88333 00:22:03.621 23:27:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 88333 00:22:03.621 23:27:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 88333 ']' 00:22:03.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.621 23:27:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.621 23:27:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:03.621 23:27:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.621 23:27:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:03.621 23:27:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.621 23:27:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:03.621 [2024-11-17 23:27:27.337803] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
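The target has just come up under the harness's usual launch-and-wait pattern: start spdk_tgt with core mask 0x1, record its pid in svcpid, then block in waitforlisten until the RPC socket answers. A minimal sketch of that pattern, assuming the default RPC socket at /var/tmp/spdk.sock; the polling loop is a simplified stand-in for the waitforlisten helper, not its exact body:

  # launch the SPDK target on core 0, as dirty_shutdown.sh@44 does above
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  # poll the RPC socket until the target responds (waitforlisten adds
  # retries and a timeout around the same idea)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.1
  done
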
00:22:03.621 [2024-11-17 23:27:27.337963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88333 ] 00:22:03.880 [2024-11-17 23:27:27.485989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.880 [2024-11-17 23:27:27.514346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.451 23:27:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.451 23:27:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:22:04.451 23:27:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:04.451 23:27:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:04.451 23:27:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:04.451 23:27:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:04.451 23:27:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:04.451 23:27:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:04.713 23:27:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:04.713 23:27:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:04.713 23:27:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:04.713 23:27:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:04.713 23:27:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:04.713 23:27:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:04.713 23:27:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:04.713 23:27:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:04.975 23:27:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:04.975 { 00:22:04.975 "name": "nvme0n1", 00:22:04.975 "aliases": [ 00:22:04.975 "dce7e5a5-a7ad-48fa-8ecc-9c8e0c7a186d" 00:22:04.975 ], 00:22:04.975 "product_name": "NVMe disk", 00:22:04.975 "block_size": 4096, 00:22:04.975 "num_blocks": 1310720, 00:22:04.975 "uuid": "dce7e5a5-a7ad-48fa-8ecc-9c8e0c7a186d", 00:22:04.975 "numa_id": -1, 00:22:04.975 "assigned_rate_limits": { 00:22:04.975 "rw_ios_per_sec": 0, 00:22:04.975 "rw_mbytes_per_sec": 0, 00:22:04.975 "r_mbytes_per_sec": 0, 00:22:04.975 "w_mbytes_per_sec": 0 00:22:04.975 }, 00:22:04.975 "claimed": true, 00:22:04.975 "claim_type": "read_many_write_one", 00:22:04.975 "zoned": false, 00:22:04.975 "supported_io_types": { 00:22:04.975 "read": true, 00:22:04.975 "write": true, 00:22:04.975 "unmap": true, 00:22:04.975 "flush": true, 00:22:04.975 "reset": true, 00:22:04.975 "nvme_admin": true, 00:22:04.975 "nvme_io": true, 00:22:04.975 "nvme_io_md": false, 00:22:04.975 "write_zeroes": true, 00:22:04.975 "zcopy": false, 00:22:04.975 "get_zone_info": false, 00:22:04.975 "zone_management": false, 00:22:04.975 "zone_append": false, 00:22:04.975 "compare": true, 00:22:04.975 "compare_and_write": false, 00:22:04.975 "abort": true, 00:22:04.975 "seek_hole": false, 00:22:04.975 "seek_data": false, 00:22:04.975 
"copy": true, 00:22:04.975 "nvme_iov_md": false 00:22:04.975 }, 00:22:04.975 "driver_specific": { 00:22:04.975 "nvme": [ 00:22:04.975 { 00:22:04.975 "pci_address": "0000:00:11.0", 00:22:04.975 "trid": { 00:22:04.975 "trtype": "PCIe", 00:22:04.975 "traddr": "0000:00:11.0" 00:22:04.975 }, 00:22:04.975 "ctrlr_data": { 00:22:04.975 "cntlid": 0, 00:22:04.975 "vendor_id": "0x1b36", 00:22:04.975 "model_number": "QEMU NVMe Ctrl", 00:22:04.975 "serial_number": "12341", 00:22:04.975 "firmware_revision": "8.0.0", 00:22:04.975 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:04.975 "oacs": { 00:22:04.975 "security": 0, 00:22:04.975 "format": 1, 00:22:04.975 "firmware": 0, 00:22:04.975 "ns_manage": 1 00:22:04.975 }, 00:22:04.975 "multi_ctrlr": false, 00:22:04.975 "ana_reporting": false 00:22:04.975 }, 00:22:04.975 "vs": { 00:22:04.975 "nvme_version": "1.4" 00:22:04.975 }, 00:22:04.975 "ns_data": { 00:22:04.975 "id": 1, 00:22:04.975 "can_share": false 00:22:04.975 } 00:22:04.975 } 00:22:04.975 ], 00:22:04.975 "mp_policy": "active_passive" 00:22:04.975 } 00:22:04.975 } 00:22:04.975 ]' 00:22:04.975 23:27:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:04.975 23:27:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:04.975 23:27:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:04.975 23:27:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:04.975 23:27:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:04.975 23:27:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:22:04.975 23:27:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:04.975 23:27:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:04.975 23:27:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:04.975 23:27:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:04.975 23:27:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:05.237 23:27:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=0c188621-f352-4b44-8886-176e8a506b51 00:22:05.237 23:27:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:05.237 23:27:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0c188621-f352-4b44-8886-176e8a506b51 00:22:05.498 23:27:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:05.760 23:27:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=931d8d75-da7f-4eac-a085-bf93a7345cc9 00:22:05.760 23:27:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 931d8d75-da7f-4eac-a085-bf93a7345cc9 00:22:06.020 23:27:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=d0f3a989-d6ff-4e4d-adf0-cd2aec9125e2 00:22:06.020 23:27:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:06.020 23:27:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d0f3a989-d6ff-4e4d-adf0-cd2aec9125e2 00:22:06.020 23:27:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:06.020 23:27:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:22:06.020 23:27:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=d0f3a989-d6ff-4e4d-adf0-cd2aec9125e2 00:22:06.020 23:27:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:06.021 23:27:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size d0f3a989-d6ff-4e4d-adf0-cd2aec9125e2 00:22:06.021 23:27:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d0f3a989-d6ff-4e4d-adf0-cd2aec9125e2 00:22:06.021 23:27:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:06.021 23:27:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:06.021 23:27:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:06.021 23:27:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d0f3a989-d6ff-4e4d-adf0-cd2aec9125e2 00:22:06.282 23:27:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:06.282 { 00:22:06.282 "name": "d0f3a989-d6ff-4e4d-adf0-cd2aec9125e2", 00:22:06.282 "aliases": [ 00:22:06.282 "lvs/nvme0n1p0" 00:22:06.282 ], 00:22:06.282 "product_name": "Logical Volume", 00:22:06.282 "block_size": 4096, 00:22:06.282 "num_blocks": 26476544, 00:22:06.282 "uuid": "d0f3a989-d6ff-4e4d-adf0-cd2aec9125e2", 00:22:06.282 "assigned_rate_limits": { 00:22:06.282 "rw_ios_per_sec": 0, 00:22:06.282 "rw_mbytes_per_sec": 0, 00:22:06.282 "r_mbytes_per_sec": 0, 00:22:06.282 "w_mbytes_per_sec": 0 00:22:06.282 }, 00:22:06.282 "claimed": false, 00:22:06.282 "zoned": false, 00:22:06.282 "supported_io_types": { 00:22:06.282 "read": true, 00:22:06.282 "write": true, 00:22:06.282 "unmap": true, 00:22:06.282 "flush": false, 00:22:06.283 "reset": true, 00:22:06.283 "nvme_admin": false, 00:22:06.283 "nvme_io": false, 00:22:06.283 "nvme_io_md": false, 00:22:06.283 "write_zeroes": true, 00:22:06.283 "zcopy": false, 00:22:06.283 "get_zone_info": false, 00:22:06.283 "zone_management": false, 00:22:06.283 "zone_append": false, 00:22:06.283 "compare": false, 00:22:06.283 "compare_and_write": false, 00:22:06.283 "abort": false, 00:22:06.283 "seek_hole": true, 00:22:06.283 "seek_data": true, 00:22:06.283 "copy": false, 00:22:06.283 "nvme_iov_md": false 00:22:06.283 }, 00:22:06.283 "driver_specific": { 00:22:06.283 "lvol": { 00:22:06.283 "lvol_store_uuid": "931d8d75-da7f-4eac-a085-bf93a7345cc9", 00:22:06.283 "base_bdev": "nvme0n1", 00:22:06.283 "thin_provision": true, 00:22:06.283 "num_allocated_clusters": 0, 00:22:06.283 "snapshot": false, 00:22:06.283 "clone": false, 00:22:06.283 "esnap_clone": false 00:22:06.283 } 00:22:06.283 } 00:22:06.283 } 00:22:06.283 ]' 00:22:06.283 23:27:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:06.283 23:27:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:06.283 23:27:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:06.283 23:27:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:06.283 23:27:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:06.283 23:27:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:06.283 23:27:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:06.283 23:27:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:06.283 23:27:29 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:06.598 23:27:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:06.598 23:27:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:06.598 23:27:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size d0f3a989-d6ff-4e4d-adf0-cd2aec9125e2 00:22:06.598 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d0f3a989-d6ff-4e4d-adf0-cd2aec9125e2 00:22:06.598 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:06.598 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:06.598 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:06.598 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d0f3a989-d6ff-4e4d-adf0-cd2aec9125e2 00:22:06.885 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:06.885 { 00:22:06.885 "name": "d0f3a989-d6ff-4e4d-adf0-cd2aec9125e2", 00:22:06.885 "aliases": [ 00:22:06.885 "lvs/nvme0n1p0" 00:22:06.885 ], 00:22:06.885 "product_name": "Logical Volume", 00:22:06.885 "block_size": 4096, 00:22:06.885 "num_blocks": 26476544, 00:22:06.885 "uuid": "d0f3a989-d6ff-4e4d-adf0-cd2aec9125e2", 00:22:06.885 "assigned_rate_limits": { 00:22:06.885 "rw_ios_per_sec": 0, 00:22:06.885 "rw_mbytes_per_sec": 0, 00:22:06.885 "r_mbytes_per_sec": 0, 00:22:06.885 "w_mbytes_per_sec": 0 00:22:06.885 }, 00:22:06.885 "claimed": false, 00:22:06.885 "zoned": false, 00:22:06.885 "supported_io_types": { 00:22:06.885 "read": true, 00:22:06.885 "write": true, 00:22:06.885 "unmap": true, 00:22:06.885 "flush": false, 00:22:06.885 "reset": true, 00:22:06.885 "nvme_admin": false, 00:22:06.885 "nvme_io": false, 00:22:06.885 "nvme_io_md": false, 00:22:06.885 "write_zeroes": true, 00:22:06.885 "zcopy": false, 00:22:06.885 "get_zone_info": false, 00:22:06.885 "zone_management": false, 00:22:06.885 "zone_append": false, 00:22:06.885 "compare": false, 00:22:06.885 "compare_and_write": false, 00:22:06.885 "abort": false, 00:22:06.885 "seek_hole": true, 00:22:06.885 "seek_data": true, 00:22:06.885 "copy": false, 00:22:06.885 "nvme_iov_md": false 00:22:06.885 }, 00:22:06.885 "driver_specific": { 00:22:06.885 "lvol": { 00:22:06.885 "lvol_store_uuid": "931d8d75-da7f-4eac-a085-bf93a7345cc9", 00:22:06.885 "base_bdev": "nvme0n1", 00:22:06.885 "thin_provision": true, 00:22:06.885 "num_allocated_clusters": 0, 00:22:06.885 "snapshot": false, 00:22:06.885 "clone": false, 00:22:06.885 "esnap_clone": false 00:22:06.885 } 00:22:06.885 } 00:22:06.885 } 00:22:06.885 ]' 00:22:06.885 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:06.885 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:06.885 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:06.885 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:06.885 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:06.885 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:06.885 23:27:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:06.885 23:27:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:07.146 23:27:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:07.146 23:27:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size d0f3a989-d6ff-4e4d-adf0-cd2aec9125e2 00:22:07.146 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d0f3a989-d6ff-4e4d-adf0-cd2aec9125e2 00:22:07.146 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:07.146 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:07.146 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:07.146 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d0f3a989-d6ff-4e4d-adf0-cd2aec9125e2 00:22:07.146 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:07.146 { 00:22:07.147 "name": "d0f3a989-d6ff-4e4d-adf0-cd2aec9125e2", 00:22:07.147 "aliases": [ 00:22:07.147 "lvs/nvme0n1p0" 00:22:07.147 ], 00:22:07.147 "product_name": "Logical Volume", 00:22:07.147 "block_size": 4096, 00:22:07.147 "num_blocks": 26476544, 00:22:07.147 "uuid": "d0f3a989-d6ff-4e4d-adf0-cd2aec9125e2", 00:22:07.147 "assigned_rate_limits": { 00:22:07.147 "rw_ios_per_sec": 0, 00:22:07.147 "rw_mbytes_per_sec": 0, 00:22:07.147 "r_mbytes_per_sec": 0, 00:22:07.147 "w_mbytes_per_sec": 0 00:22:07.147 }, 00:22:07.147 "claimed": false, 00:22:07.147 "zoned": false, 00:22:07.147 "supported_io_types": { 00:22:07.147 "read": true, 00:22:07.147 "write": true, 00:22:07.147 "unmap": true, 00:22:07.147 "flush": false, 00:22:07.147 "reset": true, 00:22:07.147 "nvme_admin": false, 00:22:07.147 "nvme_io": false, 00:22:07.147 "nvme_io_md": false, 00:22:07.147 "write_zeroes": true, 00:22:07.147 "zcopy": false, 00:22:07.147 "get_zone_info": false, 00:22:07.147 "zone_management": false, 00:22:07.147 "zone_append": false, 00:22:07.147 "compare": false, 00:22:07.147 "compare_and_write": false, 00:22:07.147 "abort": false, 00:22:07.147 "seek_hole": true, 00:22:07.147 "seek_data": true, 00:22:07.147 "copy": false, 00:22:07.147 "nvme_iov_md": false 00:22:07.147 }, 00:22:07.147 "driver_specific": { 00:22:07.147 "lvol": { 00:22:07.147 "lvol_store_uuid": "931d8d75-da7f-4eac-a085-bf93a7345cc9", 00:22:07.147 "base_bdev": "nvme0n1", 00:22:07.147 "thin_provision": true, 00:22:07.147 "num_allocated_clusters": 0, 00:22:07.147 "snapshot": false, 00:22:07.147 "clone": false, 00:22:07.147 "esnap_clone": false 00:22:07.147 } 00:22:07.147 } 00:22:07.147 } 00:22:07.147 ]' 00:22:07.147 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:07.147 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:07.147 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:07.407 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:07.407 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:07.408 23:27:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:07.408 23:27:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:07.408 23:27:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d0f3a989-d6ff-4e4d-adf0-cd2aec9125e2 
--l2p_dram_limit 10' 00:22:07.408 23:27:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:07.408 23:27:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:07.408 23:27:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:07.408 23:27:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d0f3a989-d6ff-4e4d-adf0-cd2aec9125e2 --l2p_dram_limit 10 -c nvc0n1p0 00:22:07.408 [2024-11-17 23:27:31.152652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.408 [2024-11-17 23:27:31.152692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:07.408 [2024-11-17 23:27:31.152703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:07.408 [2024-11-17 23:27:31.152711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.408 [2024-11-17 23:27:31.152752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.408 [2024-11-17 23:27:31.152761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:07.408 [2024-11-17 23:27:31.152769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:07.408 [2024-11-17 23:27:31.152778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.408 [2024-11-17 23:27:31.152793] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:07.408 [2024-11-17 23:27:31.153030] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:07.408 [2024-11-17 23:27:31.153045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.408 [2024-11-17 23:27:31.153052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:07.408 [2024-11-17 23:27:31.153059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:22:07.408 [2024-11-17 23:27:31.153066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.408 [2024-11-17 23:27:31.153090] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 760ca6d2-be61-49fd-8ade-5ebf83308e21 00:22:07.408 [2024-11-17 23:27:31.154017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.408 [2024-11-17 23:27:31.154037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:07.408 [2024-11-17 23:27:31.154048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:07.408 [2024-11-17 23:27:31.154053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.408 [2024-11-17 23:27:31.158776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.408 [2024-11-17 23:27:31.158801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:07.408 [2024-11-17 23:27:31.158813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.525 ms 00:22:07.408 [2024-11-17 23:27:31.158819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.408 [2024-11-17 23:27:31.158876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.408 [2024-11-17 23:27:31.158898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:07.408 [2024-11-17 23:27:31.158905] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:07.408 [2024-11-17 23:27:31.158911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.408 [2024-11-17 23:27:31.158966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.408 [2024-11-17 23:27:31.158973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:07.408 [2024-11-17 23:27:31.158981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:07.408 [2024-11-17 23:27:31.158987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.408 [2024-11-17 23:27:31.159005] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:07.408 [2024-11-17 23:27:31.160259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.408 [2024-11-17 23:27:31.160283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:07.408 [2024-11-17 23:27:31.160291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:22:07.408 [2024-11-17 23:27:31.160298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.408 [2024-11-17 23:27:31.160323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.408 [2024-11-17 23:27:31.160330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:07.408 [2024-11-17 23:27:31.160336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:07.408 [2024-11-17 23:27:31.160344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.408 [2024-11-17 23:27:31.160357] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:07.408 [2024-11-17 23:27:31.160467] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:07.408 [2024-11-17 23:27:31.160476] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:07.408 [2024-11-17 23:27:31.160485] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:07.408 [2024-11-17 23:27:31.160494] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:07.408 [2024-11-17 23:27:31.160505] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:07.408 [2024-11-17 23:27:31.160517] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:07.408 [2024-11-17 23:27:31.160524] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:07.408 [2024-11-17 23:27:31.160531] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:07.408 [2024-11-17 23:27:31.160538] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:07.408 [2024-11-17 23:27:31.160546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.408 [2024-11-17 23:27:31.160554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:07.408 [2024-11-17 23:27:31.160559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:22:07.408 [2024-11-17 23:27:31.160566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.408 [2024-11-17 23:27:31.160629] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.408 [2024-11-17 23:27:31.160638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:07.408 [2024-11-17 23:27:31.160644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:07.408 [2024-11-17 23:27:31.160650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.408 [2024-11-17 23:27:31.160724] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:07.408 [2024-11-17 23:27:31.160732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:07.408 [2024-11-17 23:27:31.160739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:07.408 [2024-11-17 23:27:31.160746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.408 [2024-11-17 23:27:31.160752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:07.408 [2024-11-17 23:27:31.160758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:07.408 [2024-11-17 23:27:31.160763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:07.408 [2024-11-17 23:27:31.160770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:07.408 [2024-11-17 23:27:31.160775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:07.408 [2024-11-17 23:27:31.160782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:07.408 [2024-11-17 23:27:31.160787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:07.408 [2024-11-17 23:27:31.160794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:07.408 [2024-11-17 23:27:31.160799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:07.408 [2024-11-17 23:27:31.160807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:07.408 [2024-11-17 23:27:31.160812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:07.408 [2024-11-17 23:27:31.160818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.408 [2024-11-17 23:27:31.160823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:07.408 [2024-11-17 23:27:31.160830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:07.408 [2024-11-17 23:27:31.160835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.408 [2024-11-17 23:27:31.160841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:07.408 [2024-11-17 23:27:31.160847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:07.408 [2024-11-17 23:27:31.160854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:07.408 [2024-11-17 23:27:31.160859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:07.408 [2024-11-17 23:27:31.160865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:07.408 [2024-11-17 23:27:31.160870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:07.408 [2024-11-17 23:27:31.160876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:07.408 [2024-11-17 23:27:31.160890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:07.408 [2024-11-17 23:27:31.160896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:07.408 [2024-11-17 23:27:31.160901] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:07.408 [2024-11-17 23:27:31.160909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:07.408 [2024-11-17 23:27:31.160915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:07.408 [2024-11-17 23:27:31.160923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:07.408 [2024-11-17 23:27:31.160928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:07.408 [2024-11-17 23:27:31.160935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:07.408 [2024-11-17 23:27:31.160941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:07.408 [2024-11-17 23:27:31.160949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:07.408 [2024-11-17 23:27:31.160955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:07.408 [2024-11-17 23:27:31.160962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:07.408 [2024-11-17 23:27:31.160968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:07.408 [2024-11-17 23:27:31.160975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.408 [2024-11-17 23:27:31.160981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:07.408 [2024-11-17 23:27:31.160987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:07.408 [2024-11-17 23:27:31.160993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.408 [2024-11-17 23:27:31.161000] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:07.408 [2024-11-17 23:27:31.161007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:07.408 [2024-11-17 23:27:31.161016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:07.408 [2024-11-17 23:27:31.161026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.408 [2024-11-17 23:27:31.161034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:07.408 [2024-11-17 23:27:31.161040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:07.408 [2024-11-17 23:27:31.161047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:07.408 [2024-11-17 23:27:31.161053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:07.408 [2024-11-17 23:27:31.161059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:07.408 [2024-11-17 23:27:31.161067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:07.408 [2024-11-17 23:27:31.161077] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:07.408 [2024-11-17 23:27:31.161087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:07.408 [2024-11-17 23:27:31.161096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:07.408 [2024-11-17 23:27:31.161102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:07.408 [2024-11-17 23:27:31.161111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:07.408 [2024-11-17 23:27:31.161118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:07.408 [2024-11-17 23:27:31.161125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:07.408 [2024-11-17 23:27:31.161131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:07.408 [2024-11-17 23:27:31.161140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:07.408 [2024-11-17 23:27:31.161146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:07.408 [2024-11-17 23:27:31.161154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:07.408 [2024-11-17 23:27:31.161160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:07.408 [2024-11-17 23:27:31.161167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:07.408 [2024-11-17 23:27:31.161173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:07.408 [2024-11-17 23:27:31.161181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:07.408 [2024-11-17 23:27:31.161187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:07.408 [2024-11-17 23:27:31.161195] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:07.408 [2024-11-17 23:27:31.161201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:07.408 [2024-11-17 23:27:31.161209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:07.408 [2024-11-17 23:27:31.161216] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:07.408 [2024-11-17 23:27:31.161223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:07.408 [2024-11-17 23:27:31.161230] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:07.408 [2024-11-17 23:27:31.161238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.408 [2024-11-17 23:27:31.161244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:07.408 [2024-11-17 23:27:31.161254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:22:07.408 [2024-11-17 23:27:31.161260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.408 [2024-11-17 23:27:31.161290] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:07.408 [2024-11-17 23:27:31.161297] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:11.610 [2024-11-17 23:27:34.698419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.610 [2024-11-17 23:27:34.698821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:11.610 [2024-11-17 23:27:34.699219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3537.094 ms 00:22:11.610 [2024-11-17 23:27:34.699255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-17 23:27:34.718941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.610 [2024-11-17 23:27:34.719181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:11.610 [2024-11-17 23:27:34.719539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.525 ms 00:22:11.610 [2024-11-17 23:27:34.719570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-17 23:27:34.719734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.610 [2024-11-17 23:27:34.719973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:11.610 [2024-11-17 23:27:34.720004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:22:11.610 [2024-11-17 23:27:34.720030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-17 23:27:34.737856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.610 [2024-11-17 23:27:34.738097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:11.610 [2024-11-17 23:27:34.738167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.721 ms 00:22:11.610 [2024-11-17 23:27:34.738193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-17 23:27:34.738257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.610 [2024-11-17 23:27:34.738281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:11.610 [2024-11-17 23:27:34.738307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:11.610 [2024-11-17 23:27:34.738327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-17 23:27:34.739140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.610 [2024-11-17 23:27:34.739320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:11.610 [2024-11-17 23:27:34.739393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:22:11.610 [2024-11-17 23:27:34.739419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-17 23:27:34.739580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.610 [2024-11-17 23:27:34.739680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:11.610 [2024-11-17 23:27:34.739710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:22:11.610 [2024-11-17 23:27:34.739736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-17 23:27:34.752154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.610 [2024-11-17 23:27:34.752368] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:11.610 [2024-11-17 23:27:34.752633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.086 ms 00:22:11.610 [2024-11-17 23:27:34.752648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-17 23:27:34.764359] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:11.610 [2024-11-17 23:27:34.769611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.610 [2024-11-17 23:27:34.769785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:11.610 [2024-11-17 23:27:34.769844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.853 ms 00:22:11.610 [2024-11-17 23:27:34.769871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-17 23:27:34.871714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.610 [2024-11-17 23:27:34.871965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:11.610 [2024-11-17 23:27:34.872051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.770 ms 00:22:11.610 [2024-11-17 23:27:34.872092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-17 23:27:34.872591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.610 [2024-11-17 23:27:34.872704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:11.610 [2024-11-17 23:27:34.872843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:22:11.610 [2024-11-17 23:27:34.872873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-17 23:27:34.879611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.610 [2024-11-17 23:27:34.879802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:11.610 [2024-11-17 23:27:34.879871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.640 ms 00:22:11.610 [2024-11-17 23:27:34.879924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-17 23:27:34.885497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.610 [2024-11-17 23:27:34.885670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:11.610 [2024-11-17 23:27:34.885740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.517 ms 00:22:11.610 [2024-11-17 23:27:34.885763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-17 23:27:34.886168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.610 [2024-11-17 23:27:34.886227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:11.610 [2024-11-17 23:27:34.886251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:22:11.610 [2024-11-17 23:27:34.886359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-17 23:27:34.938046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.610 [2024-11-17 23:27:34.938231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:11.610 [2024-11-17 23:27:34.938300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.622 ms 00:22:11.610 [2024-11-17 23:27:34.938332] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-17 23:27:34.946831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.610 [2024-11-17 23:27:34.947039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:11.610 [2024-11-17 23:27:34.947109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.396 ms 00:22:11.610 [2024-11-17 23:27:34.947138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.610 [2024-11-17 23:27:34.953294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.610 [2024-11-17 23:27:34.953469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:11.610 [2024-11-17 23:27:34.953528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.103 ms 00:22:11.611 [2024-11-17 23:27:34.953554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.611 [2024-11-17 23:27:34.960522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.611 [2024-11-17 23:27:34.960702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:11.611 [2024-11-17 23:27:34.960761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.833 ms 00:22:11.611 [2024-11-17 23:27:34.960790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.611 [2024-11-17 23:27:34.961078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.611 [2024-11-17 23:27:34.961112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:11.611 [2024-11-17 23:27:34.961136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:11.611 [2024-11-17 23:27:34.961158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.611 [2024-11-17 23:27:34.961352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.611 [2024-11-17 23:27:34.961389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:11.611 [2024-11-17 23:27:34.961415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:22:11.611 [2024-11-17 23:27:34.961440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.611 [2024-11-17 23:27:34.962849] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3809.588 ms, result 0 00:22:11.611 { 00:22:11.611 "name": "ftl0", 00:22:11.611 "uuid": "760ca6d2-be61-49fd-8ade-5ebf83308e21" 00:22:11.611 } 00:22:11.611 23:27:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:11.611 23:27:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:11.611 23:27:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:11.611 23:27:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:11.611 23:27:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:11.611 /dev/nbd0 00:22:11.871 23:27:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:11.871 23:27:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:11.872 23:27:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:22:11.872 23:27:35 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:11.872 23:27:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:11.872 23:27:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:11.872 23:27:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:22:11.872 23:27:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:11.872 23:27:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:11.872 23:27:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:11.872 1+0 records in 00:22:11.872 1+0 records out 00:22:11.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606178 s, 6.8 MB/s 00:22:11.872 23:27:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:11.872 23:27:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:22:11.872 23:27:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:11.872 23:27:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:11.872 23:27:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:22:11.872 23:27:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:11.872 [2024-11-17 23:27:35.513285] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:22:11.872 [2024-11-17 23:27:35.513428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88476 ] 00:22:11.872 [2024-11-17 23:27:35.651731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.872 [2024-11-17 23:27:35.680296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.257  [2024-11-17T23:27:38.022Z] Copying: 189/1024 [MB] (189 MBps) [2024-11-17T23:27:38.966Z] Copying: 378/1024 [MB] (189 MBps) [2024-11-17T23:27:39.908Z] Copying: 640/1024 [MB] (261 MBps) [2024-11-17T23:27:40.479Z] Copying: 899/1024 [MB] (259 MBps) [2024-11-17T23:27:40.479Z] Copying: 1024/1024 [MB] (average 228 MBps) 00:22:16.658 00:22:16.658 23:27:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:18.569 23:27:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:18.830 [2024-11-17 23:27:42.399343] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
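Before this fill starts (262144 blocks * 4096 bytes = 1 GiB, matching the 1024/1024 [MB] progress below), the waitfornbd probe traced above confirmed /dev/nbd0 was usable: a grep of /proc/partitions followed by a single O_DIRECT read. A standalone sketch of that probe, with the scratch path shortened to /tmp/nbdtest (an assumption; the run uses the repo's test/ftl directory) and an illustrative retry interval:

    # Wait for nbd0 to appear in /proc/partitions, then prove the block
    # device actually serves I/O with one 4 KiB direct read.
    nbd_name=nbd0                       # device name from the traced run
    for i in $(seq 1 20); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                       # retry interval is illustrative
    done
    dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    # A non-empty result file means the read completed, as in the trace.
    [ "$(stat -c %s /tmp/nbdtest)" != 0 ] && rm -f /tmp/nbdtest
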
00:22:18.830 [2024-11-17 23:27:42.399452] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88549 ] 00:22:18.830 [2024-11-17 23:27:42.538598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.830 [2024-11-17 23:27:42.554737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.210  [2024-11-17T23:27:44.602Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-17T23:27:45.987Z] Copying: 58/1024 [MB] (28 MBps) [2024-11-17T23:27:46.931Z] Copying: 87/1024 [MB] (29 MBps) [2024-11-17T23:27:47.874Z] Copying: 118/1024 [MB] (31 MBps) [2024-11-17T23:27:48.824Z] Copying: 138/1024 [MB] (19 MBps) [2024-11-17T23:27:49.777Z] Copying: 161/1024 [MB] (22 MBps) [2024-11-17T23:27:50.730Z] Copying: 184/1024 [MB] (23 MBps) [2024-11-17T23:27:51.672Z] Copying: 207/1024 [MB] (23 MBps) [2024-11-17T23:27:52.612Z] Copying: 232/1024 [MB] (24 MBps) [2024-11-17T23:27:53.998Z] Copying: 261/1024 [MB] (29 MBps) [2024-11-17T23:27:54.941Z] Copying: 290/1024 [MB] (29 MBps) [2024-11-17T23:27:55.882Z] Copying: 319/1024 [MB] (28 MBps) [2024-11-17T23:27:56.817Z] Copying: 350/1024 [MB] (30 MBps) [2024-11-17T23:27:57.758Z] Copying: 384/1024 [MB] (34 MBps) [2024-11-17T23:27:58.693Z] Copying: 412/1024 [MB] (28 MBps) [2024-11-17T23:27:59.636Z] Copying: 449/1024 [MB] (36 MBps) [2024-11-17T23:28:01.057Z] Copying: 471/1024 [MB] (22 MBps) [2024-11-17T23:28:01.646Z] Copying: 492840/1048576 [kB] (9824 kBps) [2024-11-17T23:28:03.034Z] Copying: 491/1024 [MB] (10 MBps) [2024-11-17T23:28:03.601Z] Copying: 512/1024 [MB] (20 MBps) [2024-11-17T23:28:04.980Z] Copying: 549/1024 [MB] (37 MBps) [2024-11-17T23:28:05.922Z] Copying: 586/1024 [MB] (36 MBps) [2024-11-17T23:28:06.861Z] Copying: 616/1024 [MB] (30 MBps) [2024-11-17T23:28:07.800Z] Copying: 651/1024 [MB] (34 MBps) [2024-11-17T23:28:08.739Z] Copying: 688/1024 [MB] (37 MBps) [2024-11-17T23:28:09.701Z] Copying: 722/1024 [MB] (33 MBps) [2024-11-17T23:28:10.638Z] Copying: 754/1024 [MB] (31 MBps) [2024-11-17T23:28:12.015Z] Copying: 788/1024 [MB] (34 MBps) [2024-11-17T23:28:12.949Z] Copying: 823/1024 [MB] (35 MBps) [2024-11-17T23:28:13.889Z] Copying: 861/1024 [MB] (37 MBps) [2024-11-17T23:28:14.824Z] Copying: 897/1024 [MB] (35 MBps) [2024-11-17T23:28:15.761Z] Copying: 931/1024 [MB] (34 MBps) [2024-11-17T23:28:16.701Z] Copying: 968/1024 [MB] (37 MBps) [2024-11-17T23:28:17.269Z] Copying: 1003/1024 [MB] (34 MBps) [2024-11-17T23:28:17.526Z] Copying: 1024/1024 [MB] (average 29 MBps) 00:22:53.705 00:22:53.705 23:28:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:22:53.705 23:28:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:22:53.705 23:28:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:53.964 [2024-11-17 23:28:17.691747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.964 [2024-11-17 23:28:17.691793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:53.964 [2024-11-17 23:28:17.691807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:53.964 [2024-11-17 23:28:17.691814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.964 [2024-11-17 23:28:17.691834] 
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:53.964 [2024-11-17 23:28:17.692397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.964 [2024-11-17 23:28:17.692428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:53.964 [2024-11-17 23:28:17.692437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:22:53.964 [2024-11-17 23:28:17.692444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.964 [2024-11-17 23:28:17.694316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.964 [2024-11-17 23:28:17.694346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:53.964 [2024-11-17 23:28:17.694354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.854 ms 00:22:53.964 [2024-11-17 23:28:17.694361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.964 [2024-11-17 23:28:17.713513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.964 [2024-11-17 23:28:17.713544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:53.964 [2024-11-17 23:28:17.713553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.137 ms 00:22:53.964 [2024-11-17 23:28:17.713563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.964 [2024-11-17 23:28:17.718352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.964 [2024-11-17 23:28:17.718378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:53.964 [2024-11-17 23:28:17.718387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.760 ms 00:22:53.964 [2024-11-17 23:28:17.718395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.964 [2024-11-17 23:28:17.720267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.964 [2024-11-17 23:28:17.720298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:53.964 [2024-11-17 23:28:17.720306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.815 ms 00:22:53.964 [2024-11-17 23:28:17.720313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.964 [2024-11-17 23:28:17.725483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.964 [2024-11-17 23:28:17.725516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:53.964 [2024-11-17 23:28:17.725524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.136 ms 00:22:53.964 [2024-11-17 23:28:17.725533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.964 [2024-11-17 23:28:17.725629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.964 [2024-11-17 23:28:17.725639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:53.964 [2024-11-17 23:28:17.725646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:53.964 [2024-11-17 23:28:17.725657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.964 [2024-11-17 23:28:17.728265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.964 [2024-11-17 23:28:17.728300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:53.964 [2024-11-17 23:28:17.728307] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.594 ms 00:22:53.964 [2024-11-17 23:28:17.728314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.964 [2024-11-17 23:28:17.730903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.964 [2024-11-17 23:28:17.730933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:53.964 [2024-11-17 23:28:17.730940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.548 ms 00:22:53.964 [2024-11-17 23:28:17.730947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.964 [2024-11-17 23:28:17.732189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.964 [2024-11-17 23:28:17.732218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:53.964 [2024-11-17 23:28:17.732225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.215 ms 00:22:53.964 [2024-11-17 23:28:17.732232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.964 [2024-11-17 23:28:17.733499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.964 [2024-11-17 23:28:17.733533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:53.964 [2024-11-17 23:28:17.733540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms 00:22:53.964 [2024-11-17 23:28:17.733548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.964 [2024-11-17 23:28:17.733573] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:53.964 [2024-11-17 23:28:17.733588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733683] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 
23:28:17.733860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.733996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 
00:22:53.965 [2024-11-17 23:28:17.734055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:53.965 [2024-11-17 23:28:17.734203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:53.966 [2024-11-17 23:28:17.734210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:53.966 [2024-11-17 23:28:17.734216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 
wr_cnt: 0 state: free 00:22:53.966 [2024-11-17 23:28:17.734222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:53.966 [2024-11-17 23:28:17.734228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:53.966 [2024-11-17 23:28:17.734235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:53.966 [2024-11-17 23:28:17.734241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:53.966 [2024-11-17 23:28:17.734248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:53.966 [2024-11-17 23:28:17.734255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:53.966 [2024-11-17 23:28:17.734262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:53.966 [2024-11-17 23:28:17.734268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:53.966 [2024-11-17 23:28:17.734275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:53.966 [2024-11-17 23:28:17.734281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:53.966 [2024-11-17 23:28:17.734288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:53.966 [2024-11-17 23:28:17.734293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:53.966 [2024-11-17 23:28:17.734308] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:53.966 [2024-11-17 23:28:17.734316] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 760ca6d2-be61-49fd-8ade-5ebf83308e21 00:22:53.966 [2024-11-17 23:28:17.734325] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:53.966 [2024-11-17 23:28:17.734330] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:53.966 [2024-11-17 23:28:17.734338] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:53.966 [2024-11-17 23:28:17.734344] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:53.966 [2024-11-17 23:28:17.734351] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:53.966 [2024-11-17 23:28:17.734357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:53.966 [2024-11-17 23:28:17.734365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:53.966 [2024-11-17 23:28:17.734369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:53.966 [2024-11-17 23:28:17.734376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:53.966 [2024-11-17 23:28:17.734382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.966 [2024-11-17 23:28:17.734393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:53.966 [2024-11-17 23:28:17.734400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:22:53.966 [2024-11-17 23:28:17.734409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.966 [2024-11-17 23:28:17.736184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
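The statistics dump above reports total writes: 960 against user writes: 0, which is why the WAF field prints as inf. Assuming the conventional definition of write amplification (a sketch, not quoted from the SPDK source), the arithmetic is:

\[ \mathrm{WAF} = \frac{\text{total writes}}{\text{user writes}} = \frac{960}{0} = \infty \]

The 960 media writes are evidently the metadata persisted during the clean shutdown traced above (superblock, trim map and related structures); no user data has been written yet, so the denominator is zero.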
00:22:53.966 [2024-11-17 23:28:17.736211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:53.966 [2024-11-17 23:28:17.736218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.762 ms 00:22:53.966 [2024-11-17 23:28:17.736227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.966 [2024-11-17 23:28:17.736313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.966 [2024-11-17 23:28:17.736322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:53.966 [2024-11-17 23:28:17.736331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:53.966 [2024-11-17 23:28:17.736349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.966 [2024-11-17 23:28:17.742385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.966 [2024-11-17 23:28:17.742412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:53.966 [2024-11-17 23:28:17.742422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.966 [2024-11-17 23:28:17.742430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.966 [2024-11-17 23:28:17.742479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.966 [2024-11-17 23:28:17.742488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:53.966 [2024-11-17 23:28:17.742498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.966 [2024-11-17 23:28:17.742509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.966 [2024-11-17 23:28:17.742560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.966 [2024-11-17 23:28:17.742572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:53.966 [2024-11-17 23:28:17.742579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.966 [2024-11-17 23:28:17.742586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.966 [2024-11-17 23:28:17.742600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.966 [2024-11-17 23:28:17.742608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:53.966 [2024-11-17 23:28:17.742614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.966 [2024-11-17 23:28:17.742622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.966 [2024-11-17 23:28:17.753751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.966 [2024-11-17 23:28:17.753790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:53.966 [2024-11-17 23:28:17.753798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.966 [2024-11-17 23:28:17.753806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.966 [2024-11-17 23:28:17.762897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.966 [2024-11-17 23:28:17.762933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:53.966 [2024-11-17 23:28:17.762942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.966 [2024-11-17 23:28:17.762952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.966 [2024-11-17 
23:28:17.763060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.966 [2024-11-17 23:28:17.763075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:53.966 [2024-11-17 23:28:17.763082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.966 [2024-11-17 23:28:17.763097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.966 [2024-11-17 23:28:17.763127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.966 [2024-11-17 23:28:17.763137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:53.966 [2024-11-17 23:28:17.763143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.966 [2024-11-17 23:28:17.763151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.966 [2024-11-17 23:28:17.763214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.966 [2024-11-17 23:28:17.763225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:53.966 [2024-11-17 23:28:17.763232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.966 [2024-11-17 23:28:17.763240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.966 [2024-11-17 23:28:17.763265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.966 [2024-11-17 23:28:17.763275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:53.966 [2024-11-17 23:28:17.763281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.966 [2024-11-17 23:28:17.763289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.966 [2024-11-17 23:28:17.763327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.966 [2024-11-17 23:28:17.763339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:53.966 [2024-11-17 23:28:17.763345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.966 [2024-11-17 23:28:17.763353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.966 [2024-11-17 23:28:17.763394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.966 [2024-11-17 23:28:17.763405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:53.966 [2024-11-17 23:28:17.763411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.966 [2024-11-17 23:28:17.763420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.966 [2024-11-17 23:28:17.763547] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.760 ms, result 0 00:22:53.966 true 00:22:54.225 23:28:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 88333 00:22:54.225 23:28:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid88333 00:22:54.225 23:28:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:22:54.225 [2024-11-17 23:28:17.846476] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
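The three script-trace lines just above (dirty_shutdown.sh@83, @84 and @87) are the setup for the dirty-shutdown scenario. Restated as a commented bash sketch (the PID, paths and flags are copied verbatim from the log; the comments are editorial, not part of the script):

# dirty_shutdown.sh@83-84: stop the first SPDK target (PID 88333 in this
# run) with SIGKILL and remove the trace file it left behind in /dev/shm.
kill -9 88333
rm -f /dev/shm/spdk_tgt_trace.pid88333

# dirty_shutdown.sh@87: generate 1 GiB of random reference data
# (262144 blocks x 4096 B) to be written through ftl0 in the next step.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
    --bs=4096 --count=262144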
00:22:54.225 [2024-11-17 23:28:17.846590] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88926 ] 00:22:54.225 [2024-11-17 23:28:17.989661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.225 [2024-11-17 23:28:18.017416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.605  [2024-11-17T23:28:20.361Z] Copying: 255/1024 [MB] (255 MBps) [2024-11-17T23:28:21.300Z] Copying: 508/1024 [MB] (252 MBps) [2024-11-17T23:28:22.237Z] Copying: 763/1024 [MB] (255 MBps) [2024-11-17T23:28:22.237Z] Copying: 1009/1024 [MB] (246 MBps) [2024-11-17T23:28:22.495Z] Copying: 1024/1024 [MB] (average 252 MBps) 00:22:58.674 00:22:58.674 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 88333 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:22:58.674 23:28:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:58.674 [2024-11-17 23:28:22.363141] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:22:58.674 [2024-11-17 23:28:22.363300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88979 ] 00:22:58.939 [2024-11-17 23:28:22.502424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.939 [2024-11-17 23:28:22.524021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.939 [2024-11-17 23:28:22.623741] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:58.939 [2024-11-17 23:28:22.623796] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:58.939 [2024-11-17 23:28:22.686212] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:22:58.939 [2024-11-17 23:28:22.686754] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:22:58.939 [2024-11-17 23:28:22.687297] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:22:59.507 [2024-11-17 23:28:23.037930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.507 [2024-11-17 23:28:23.037966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:59.507 [2024-11-17 23:28:23.037977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:59.507 [2024-11-17 23:28:23.037984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.507 [2024-11-17 23:28:23.038022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.507 [2024-11-17 23:28:23.038035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:59.507 [2024-11-17 23:28:23.038042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:59.507 [2024-11-17 23:28:23.038048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.507 [2024-11-17 23:28:23.038062] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:59.507 
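dirty_shutdown.sh@88, traced above, then replays that reference file through the FTL bdev itself. The same invocation with inline commentary (all flags are verbatim from the log; the explanation of --json is an inference from the test flow, not from the script source):

# Write the 1 GiB reference file through ftl0 at a 262144-block offset
# (--seek). ftl.json, saved earlier by the test, presumably lets spdk_dd
# recreate ftl0 on top of its base and NV-cache bdevs in its own process.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
    --ob=ftl0 --count=262144 --seek=262144 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

Consistent with the earlier kill -9, the load path first fails to find nvc0n1, performs blobstore recovery, and only then runs the 'FTL startup' management process whose trace_step records follow.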
[2024-11-17 23:28:23.038251] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:59.507 [2024-11-17 23:28:23.038262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.507 [2024-11-17 23:28:23.038268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:59.507 [2024-11-17 23:28:23.038275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:22:59.507 [2024-11-17 23:28:23.038280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.507 [2024-11-17 23:28:23.039534] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:59.507 [2024-11-17 23:28:23.042459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.507 [2024-11-17 23:28:23.042487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:59.507 [2024-11-17 23:28:23.042495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.926 ms 00:22:59.507 [2024-11-17 23:28:23.042502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.508 [2024-11-17 23:28:23.042548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.508 [2024-11-17 23:28:23.042556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:59.508 [2024-11-17 23:28:23.042563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:59.508 [2024-11-17 23:28:23.042569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.508 [2024-11-17 23:28:23.048841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.508 [2024-11-17 23:28:23.048869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:59.508 [2024-11-17 23:28:23.048886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.226 ms 00:22:59.508 [2024-11-17 23:28:23.048893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.508 [2024-11-17 23:28:23.048962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.508 [2024-11-17 23:28:23.048970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:59.508 [2024-11-17 23:28:23.048977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:22:59.508 [2024-11-17 23:28:23.048983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.508 [2024-11-17 23:28:23.049022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.508 [2024-11-17 23:28:23.049029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:59.508 [2024-11-17 23:28:23.049037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:59.508 [2024-11-17 23:28:23.049047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.508 [2024-11-17 23:28:23.049064] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:59.508 [2024-11-17 23:28:23.050598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.508 [2024-11-17 23:28:23.050625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:59.508 [2024-11-17 23:28:23.050632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.538 ms 00:22:59.508 [2024-11-17 23:28:23.050638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:59.508 [2024-11-17 23:28:23.050664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.508 [2024-11-17 23:28:23.050671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:59.508 [2024-11-17 23:28:23.050683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:59.508 [2024-11-17 23:28:23.050689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.508 [2024-11-17 23:28:23.050707] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:59.508 [2024-11-17 23:28:23.050723] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:59.508 [2024-11-17 23:28:23.050759] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:59.508 [2024-11-17 23:28:23.050774] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:59.508 [2024-11-17 23:28:23.050857] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:59.508 [2024-11-17 23:28:23.050869] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:59.508 [2024-11-17 23:28:23.050904] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:59.508 [2024-11-17 23:28:23.050914] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:59.508 [2024-11-17 23:28:23.050921] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:59.508 [2024-11-17 23:28:23.050929] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:59.508 [2024-11-17 23:28:23.050936] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:59.508 [2024-11-17 23:28:23.050942] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:59.508 [2024-11-17 23:28:23.050947] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:59.508 [2024-11-17 23:28:23.050955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.508 [2024-11-17 23:28:23.050964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:59.508 [2024-11-17 23:28:23.050970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:22:59.508 [2024-11-17 23:28:23.050976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.508 [2024-11-17 23:28:23.051040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.508 [2024-11-17 23:28:23.051050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:59.508 [2024-11-17 23:28:23.051056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:59.508 [2024-11-17 23:28:23.051066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.508 [2024-11-17 23:28:23.051148] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:59.508 [2024-11-17 23:28:23.051162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:59.508 [2024-11-17 23:28:23.051170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:59.508 [2024-11-17 23:28:23.051181] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.508 [2024-11-17 23:28:23.051187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:59.508 [2024-11-17 23:28:23.051193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:59.508 [2024-11-17 23:28:23.051198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:59.508 [2024-11-17 23:28:23.051203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:59.508 [2024-11-17 23:28:23.051209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:59.508 [2024-11-17 23:28:23.051214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:59.508 [2024-11-17 23:28:23.051218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:59.508 [2024-11-17 23:28:23.051223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:59.508 [2024-11-17 23:28:23.051228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:59.508 [2024-11-17 23:28:23.051237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:59.508 [2024-11-17 23:28:23.051243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:59.508 [2024-11-17 23:28:23.051249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.508 [2024-11-17 23:28:23.051255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:59.508 [2024-11-17 23:28:23.051261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:59.508 [2024-11-17 23:28:23.051267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.508 [2024-11-17 23:28:23.051277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:59.508 [2024-11-17 23:28:23.051284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:59.508 [2024-11-17 23:28:23.051290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.508 [2024-11-17 23:28:23.051295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:59.508 [2024-11-17 23:28:23.051302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:59.508 [2024-11-17 23:28:23.051307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.508 [2024-11-17 23:28:23.051312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:59.508 [2024-11-17 23:28:23.051318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:59.508 [2024-11-17 23:28:23.051324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.508 [2024-11-17 23:28:23.051329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:59.508 [2024-11-17 23:28:23.051336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:59.508 [2024-11-17 23:28:23.051341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:59.508 [2024-11-17 23:28:23.051347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:59.508 [2024-11-17 23:28:23.051352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:59.508 [2024-11-17 23:28:23.051358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:59.508 [2024-11-17 23:28:23.051363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:59.508 
[2024-11-17 23:28:23.051371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:59.508 [2024-11-17 23:28:23.051377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:59.508 [2024-11-17 23:28:23.051384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:59.508 [2024-11-17 23:28:23.051390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:59.508 [2024-11-17 23:28:23.051395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.508 [2024-11-17 23:28:23.051401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:59.508 [2024-11-17 23:28:23.051407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:59.508 [2024-11-17 23:28:23.051415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.508 [2024-11-17 23:28:23.051421] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:59.508 [2024-11-17 23:28:23.051427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:59.508 [2024-11-17 23:28:23.051435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:59.508 [2024-11-17 23:28:23.051442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:59.508 [2024-11-17 23:28:23.051449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:59.508 [2024-11-17 23:28:23.051455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:59.508 [2024-11-17 23:28:23.051461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:59.508 [2024-11-17 23:28:23.051467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:59.508 [2024-11-17 23:28:23.051475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:59.508 [2024-11-17 23:28:23.051481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:59.508 [2024-11-17 23:28:23.051488] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:59.508 [2024-11-17 23:28:23.051496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:59.508 [2024-11-17 23:28:23.051506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:59.508 [2024-11-17 23:28:23.051512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:59.509 [2024-11-17 23:28:23.051518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:59.509 [2024-11-17 23:28:23.051524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:59.509 [2024-11-17 23:28:23.051530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:59.509 [2024-11-17 23:28:23.051537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:59.509 [2024-11-17 23:28:23.051544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:22:59.509 [2024-11-17 23:28:23.051554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:59.509 [2024-11-17 23:28:23.051560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:59.509 [2024-11-17 23:28:23.051566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:59.509 [2024-11-17 23:28:23.051573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:59.509 [2024-11-17 23:28:23.051579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:59.509 [2024-11-17 23:28:23.051587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:59.509 [2024-11-17 23:28:23.051594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:59.509 [2024-11-17 23:28:23.051601] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:59.509 [2024-11-17 23:28:23.051608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:59.509 [2024-11-17 23:28:23.051617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:59.509 [2024-11-17 23:28:23.051623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:59.509 [2024-11-17 23:28:23.051629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:59.509 [2024-11-17 23:28:23.051634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:59.509 [2024-11-17 23:28:23.051641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.051647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:59.509 [2024-11-17 23:28:23.051654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:22:59.509 [2024-11-17 23:28:23.051659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.509 [2024-11-17 23:28:23.062917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.062941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:59.509 [2024-11-17 23:28:23.062949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.225 ms 00:22:59.509 [2024-11-17 23:28:23.062956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.509 [2024-11-17 23:28:23.063019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.063028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:59.509 [2024-11-17 23:28:23.063037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:59.509 [2024-11-17 
23:28:23.063043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.509 [2024-11-17 23:28:23.081621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.081670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:59.509 [2024-11-17 23:28:23.081685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.535 ms 00:22:59.509 [2024-11-17 23:28:23.081695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.509 [2024-11-17 23:28:23.081741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.081756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:59.509 [2024-11-17 23:28:23.081768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:59.509 [2024-11-17 23:28:23.081777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.509 [2024-11-17 23:28:23.082271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.082303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:59.509 [2024-11-17 23:28:23.082317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:22:59.509 [2024-11-17 23:28:23.082328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.509 [2024-11-17 23:28:23.082504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.082517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:59.509 [2024-11-17 23:28:23.082535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:22:59.509 [2024-11-17 23:28:23.082548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.509 [2024-11-17 23:28:23.089678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.089705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:59.509 [2024-11-17 23:28:23.089713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.105 ms 00:22:59.509 [2024-11-17 23:28:23.089725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.509 [2024-11-17 23:28:23.092554] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:59.509 [2024-11-17 23:28:23.092583] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:59.509 [2024-11-17 23:28:23.092592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.092600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:59.509 [2024-11-17 23:28:23.092606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.794 ms 00:22:59.509 [2024-11-17 23:28:23.092612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.509 [2024-11-17 23:28:23.103947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.103974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:59.509 [2024-11-17 23:28:23.103988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.303 ms 00:22:59.509 [2024-11-17 23:28:23.103995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:22:59.509 [2024-11-17 23:28:23.105595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.105621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:59.509 [2024-11-17 23:28:23.105632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.571 ms 00:22:59.509 [2024-11-17 23:28:23.105638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.509 [2024-11-17 23:28:23.106972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.106997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:59.509 [2024-11-17 23:28:23.107004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms 00:22:59.509 [2024-11-17 23:28:23.107010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.509 [2024-11-17 23:28:23.107252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.107271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:59.509 [2024-11-17 23:28:23.107278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:22:59.509 [2024-11-17 23:28:23.107284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.509 [2024-11-17 23:28:23.125412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.125442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:59.509 [2024-11-17 23:28:23.125455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.115 ms 00:22:59.509 [2024-11-17 23:28:23.125462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.509 [2024-11-17 23:28:23.131669] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:59.509 [2024-11-17 23:28:23.134070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.134095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:59.509 [2024-11-17 23:28:23.134104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.577 ms 00:22:59.509 [2024-11-17 23:28:23.134115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.509 [2024-11-17 23:28:23.134159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.134167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:59.509 [2024-11-17 23:28:23.134175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:59.509 [2024-11-17 23:28:23.134183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.509 [2024-11-17 23:28:23.134255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.134264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:59.509 [2024-11-17 23:28:23.134273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:59.509 [2024-11-17 23:28:23.134280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.509 [2024-11-17 23:28:23.134299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.134306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:59.509 
[2024-11-17 23:28:23.134313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:59.509 [2024-11-17 23:28:23.134319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.509 [2024-11-17 23:28:23.134349] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:59.509 [2024-11-17 23:28:23.134357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.134363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:59.509 [2024-11-17 23:28:23.134370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:59.509 [2024-11-17 23:28:23.134376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.509 [2024-11-17 23:28:23.138135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.138162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:59.509 [2024-11-17 23:28:23.138171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.743 ms 00:22:59.509 [2024-11-17 23:28:23.138177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.509 [2024-11-17 23:28:23.138239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.509 [2024-11-17 23:28:23.138246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:59.510 [2024-11-17 23:28:23.138253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:59.510 [2024-11-17 23:28:23.138259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.510 [2024-11-17 23:28:23.139191] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 100.883 ms, result 0 00:23:00.465  [2024-11-17T23:28:25.222Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-17T23:28:26.156Z] Copying: 30/1024 [MB] (11 MBps) [2024-11-17T23:28:27.531Z] Copying: 42/1024 [MB] (11 MBps) [2024-11-17T23:28:28.465Z] Copying: 53/1024 [MB] (11 MBps) [2024-11-17T23:28:29.458Z] Copying: 64/1024 [MB] (11 MBps) [2024-11-17T23:28:30.418Z] Copying: 76/1024 [MB] (11 MBps) [2024-11-17T23:28:31.351Z] Copying: 87/1024 [MB] (11 MBps) [2024-11-17T23:28:32.285Z] Copying: 98/1024 [MB] (11 MBps) [2024-11-17T23:28:33.226Z] Copying: 110/1024 [MB] (11 MBps) [2024-11-17T23:28:34.161Z] Copying: 121/1024 [MB] (11 MBps) [2024-11-17T23:28:35.548Z] Copying: 132/1024 [MB] (11 MBps) [2024-11-17T23:28:36.484Z] Copying: 143/1024 [MB] (10 MBps) [2024-11-17T23:28:37.419Z] Copying: 154/1024 [MB] (11 MBps) [2024-11-17T23:28:38.351Z] Copying: 166/1024 [MB] (11 MBps) [2024-11-17T23:28:39.286Z] Copying: 177/1024 [MB] (11 MBps) [2024-11-17T23:28:40.221Z] Copying: 188/1024 [MB] (11 MBps) [2024-11-17T23:28:41.172Z] Copying: 200/1024 [MB] (11 MBps) [2024-11-17T23:28:42.553Z] Copying: 210/1024 [MB] (10 MBps) [2024-11-17T23:28:43.492Z] Copying: 225792/1048576 [kB] (10216 kBps) [2024-11-17T23:28:44.428Z] Copying: 231/1024 [MB] (10 MBps) [2024-11-17T23:28:45.364Z] Copying: 242/1024 [MB] (11 MBps) [2024-11-17T23:28:46.302Z] Copying: 254/1024 [MB] (11 MBps) [2024-11-17T23:28:47.242Z] Copying: 264/1024 [MB] (10 MBps) [2024-11-17T23:28:48.184Z] Copying: 276/1024 [MB] (12 MBps) [2024-11-17T23:28:49.557Z] Copying: 289/1024 [MB] (12 MBps) [2024-11-17T23:28:50.490Z] Copying: 300/1024 [MB] (11 MBps) [2024-11-17T23:28:51.421Z] Copying: 311/1024 [MB] (11 MBps) [2024-11-17T23:28:52.354Z] Copying: 
322/1024 [MB] (11 MBps) [2024-11-17T23:28:53.287Z] Copying: 334/1024 [MB] (11 MBps) [2024-11-17T23:28:54.221Z] Copying: 345/1024 [MB] (11 MBps) [2024-11-17T23:28:55.156Z] Copying: 356/1024 [MB] (11 MBps) [2024-11-17T23:28:56.530Z] Copying: 367/1024 [MB] (11 MBps) [2024-11-17T23:28:57.470Z] Copying: 379/1024 [MB] (11 MBps) [2024-11-17T23:28:58.478Z] Copying: 390/1024 [MB] (11 MBps) [2024-11-17T23:28:59.436Z] Copying: 401/1024 [MB] (11 MBps) [2024-11-17T23:29:00.373Z] Copying: 413/1024 [MB] (11 MBps) [2024-11-17T23:29:01.319Z] Copying: 424/1024 [MB] (11 MBps) [2024-11-17T23:29:02.252Z] Copying: 435/1024 [MB] (10 MBps) [2024-11-17T23:29:03.190Z] Copying: 446/1024 [MB] (11 MBps) [2024-11-17T23:29:04.567Z] Copying: 457/1024 [MB] (10 MBps) [2024-11-17T23:29:05.503Z] Copying: 467/1024 [MB] (10 MBps) [2024-11-17T23:29:06.439Z] Copying: 478/1024 [MB] (10 MBps) [2024-11-17T23:29:07.376Z] Copying: 489/1024 [MB] (11 MBps) [2024-11-17T23:29:08.323Z] Copying: 500/1024 [MB] (10 MBps) [2024-11-17T23:29:09.266Z] Copying: 511/1024 [MB] (11 MBps) [2024-11-17T23:29:10.207Z] Copying: 522/1024 [MB] (10 MBps) [2024-11-17T23:29:11.585Z] Copying: 533/1024 [MB] (11 MBps) [2024-11-17T23:29:12.519Z] Copying: 543/1024 [MB] (10 MBps) [2024-11-17T23:29:13.454Z] Copying: 554/1024 [MB] (10 MBps) [2024-11-17T23:29:14.391Z] Copying: 566/1024 [MB] (11 MBps) [2024-11-17T23:29:15.325Z] Copying: 577/1024 [MB] (11 MBps) [2024-11-17T23:29:16.260Z] Copying: 588/1024 [MB] (11 MBps) [2024-11-17T23:29:17.192Z] Copying: 600/1024 [MB] (11 MBps) [2024-11-17T23:29:18.565Z] Copying: 611/1024 [MB] (11 MBps) [2024-11-17T23:29:19.503Z] Copying: 622/1024 [MB] (11 MBps) [2024-11-17T23:29:20.436Z] Copying: 633/1024 [MB] (10 MBps) [2024-11-17T23:29:21.371Z] Copying: 644/1024 [MB] (10 MBps) [2024-11-17T23:29:22.306Z] Copying: 655/1024 [MB] (10 MBps) [2024-11-17T23:29:23.239Z] Copying: 666/1024 [MB] (11 MBps) [2024-11-17T23:29:24.180Z] Copying: 677/1024 [MB] (11 MBps) [2024-11-17T23:29:25.552Z] Copying: 688/1024 [MB] (10 MBps) [2024-11-17T23:29:26.485Z] Copying: 699/1024 [MB] (11 MBps) [2024-11-17T23:29:27.503Z] Copying: 710/1024 [MB] (11 MBps) [2024-11-17T23:29:28.442Z] Copying: 721/1024 [MB] (11 MBps) [2024-11-17T23:29:29.379Z] Copying: 733/1024 [MB] (11 MBps) [2024-11-17T23:29:30.323Z] Copying: 744/1024 [MB] (11 MBps) [2024-11-17T23:29:31.258Z] Copying: 772432/1048576 [kB] (10200 kBps) [2024-11-17T23:29:32.196Z] Copying: 764/1024 [MB] (10 MBps) [2024-11-17T23:29:33.573Z] Copying: 779/1024 [MB] (14 MBps) [2024-11-17T23:29:34.507Z] Copying: 792/1024 [MB] (12 MBps) [2024-11-17T23:29:35.447Z] Copying: 811/1024 [MB] (18 MBps) [2024-11-17T23:29:36.383Z] Copying: 822/1024 [MB] (11 MBps) [2024-11-17T23:29:37.325Z] Copying: 837/1024 [MB] (14 MBps) [2024-11-17T23:29:38.260Z] Copying: 851/1024 [MB] (14 MBps) [2024-11-17T23:29:39.194Z] Copying: 862/1024 [MB] (10 MBps) [2024-11-17T23:29:40.569Z] Copying: 873/1024 [MB] (11 MBps) [2024-11-17T23:29:41.503Z] Copying: 886/1024 [MB] (12 MBps) [2024-11-17T23:29:42.437Z] Copying: 899/1024 [MB] (12 MBps) [2024-11-17T23:29:43.371Z] Copying: 910/1024 [MB] (11 MBps) [2024-11-17T23:29:44.306Z] Copying: 921/1024 [MB] (11 MBps) [2024-11-17T23:29:45.252Z] Copying: 933/1024 [MB] (11 MBps) [2024-11-17T23:29:46.198Z] Copying: 944/1024 [MB] (11 MBps) [2024-11-17T23:29:47.579Z] Copying: 954/1024 [MB] (10 MBps) [2024-11-17T23:29:48.514Z] Copying: 965/1024 [MB] (10 MBps) [2024-11-17T23:29:49.449Z] Copying: 977/1024 [MB] (11 MBps) [2024-11-17T23:29:50.382Z] Copying: 988/1024 [MB] (11 MBps) [2024-11-17T23:29:51.318Z] Copying: 
999/1024 [MB] (11 MBps) [2024-11-17T23:29:52.256Z] Copying: 1010/1024 [MB] (10 MBps) [2024-11-17T23:29:53.198Z] Copying: 1022/1024 [MB] (11 MBps) [2024-11-17T23:29:53.459Z] Copying: 1048512/1048576 [kB] (1928 kBps) [2024-11-17T23:29:53.459Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-11-17 23:29:53.221698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.638 [2024-11-17 23:29:53.221915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:29.639 [2024-11-17 23:29:53.221990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:29.639 [2024-11-17 23:29:53.222017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.639 [2024-11-17 23:29:53.223670] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:29.639 [2024-11-17 23:29:53.227211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.639 [2024-11-17 23:29:53.227389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:29.639 [2024-11-17 23:29:53.227508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.207 ms 00:24:29.639 [2024-11-17 23:29:53.227535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.639 [2024-11-17 23:29:53.251114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.639 [2024-11-17 23:29:53.251300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:29.639 [2024-11-17 23:29:53.251758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.489 ms 00:24:29.639 [2024-11-17 23:29:53.251810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.639 [2024-11-17 23:29:53.279161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.639 [2024-11-17 23:29:53.279346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:29.639 [2024-11-17 23:29:53.279476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.223 ms 00:24:29.639 [2024-11-17 23:29:53.279501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.639 [2024-11-17 23:29:53.285697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.639 [2024-11-17 23:29:53.285845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:29.639 [2024-11-17 23:29:53.285921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.147 ms 00:24:29.639 [2024-11-17 23:29:53.285958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.639 [2024-11-17 23:29:53.288697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.639 [2024-11-17 23:29:53.288843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:29.639 [2024-11-17 23:29:53.288910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.684 ms 00:24:29.639 [2024-11-17 23:29:53.288934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.639 [2024-11-17 23:29:53.295051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.639 [2024-11-17 23:29:53.295248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:29.639 [2024-11-17 23:29:53.295271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.842 ms 00:24:29.639 [2024-11-17 23:29:53.295281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:29.903 [2024-11-17 23:29:53.580778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.903 [2024-11-17 23:29:53.580846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:29.903 [2024-11-17 23:29:53.580863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 285.448 ms 00:24:29.903 [2024-11-17 23:29:53.580896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.903 [2024-11-17 23:29:53.584683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.903 [2024-11-17 23:29:53.584724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:29.903 [2024-11-17 23:29:53.584734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.768 ms 00:24:29.903 [2024-11-17 23:29:53.584743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.903 [2024-11-17 23:29:53.587630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.903 [2024-11-17 23:29:53.587668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:29.903 [2024-11-17 23:29:53.587677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.845 ms 00:24:29.903 [2024-11-17 23:29:53.587684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.903 [2024-11-17 23:29:53.590076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.903 [2024-11-17 23:29:53.590115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:29.903 [2024-11-17 23:29:53.590124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.353 ms 00:24:29.903 [2024-11-17 23:29:53.590131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.903 [2024-11-17 23:29:53.592328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.903 [2024-11-17 23:29:53.592367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:29.903 [2024-11-17 23:29:53.592376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.129 ms 00:24:29.903 [2024-11-17 23:29:53.592383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.903 [2024-11-17 23:29:53.592418] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:29.903 [2024-11-17 23:29:53.592433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 102400 / 261120 wr_cnt: 1 state: open 00:24:29.903 [2024-11-17 23:29:53.592465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 
[2024-11-17 23:29:53.592517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 
state: free 00:24:29.903 [2024-11-17 23:29:53.592707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:29.903 [2024-11-17 23:29:53.592785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 
0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.592999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:29.904 [2024-11-17 23:29:53.593256] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:29.904 [2024-11-17 23:29:53.593264] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 760ca6d2-be61-49fd-8ade-5ebf83308e21 00:24:29.904 [2024-11-17 23:29:53.593278] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 102400 00:24:29.904 [2024-11-17 23:29:53.593285] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 103360 00:24:29.904 [2024-11-17 23:29:53.593294] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 102400 00:24:29.904 [2024-11-17 23:29:53.593303] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0094 00:24:29.904 [2024-11-17 23:29:53.593311] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:29.904 [2024-11-17 23:29:53.593319] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:29.904 [2024-11-17 23:29:53.593326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 
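
The WAF figure in the statistics dump above is simply total media writes divided by host writes; the extra 960 blocks (103360 - 102400) are the FTL's own metadata writes, and "user writes" equals "total valid LBAs", consistent with nothing having been overwritten. A quick arithmetic check of the values reported by ftl_dev_dump_stats (Python; a verification sketch, not part of the test itself):

    total_writes = 103360   # "total writes" from the dump above
    user_writes = 102400    # "user writes" from the dump above
    print(f"WAF: {total_writes / user_writes:.4f}")   # -> WAF: 1.0094, as reported
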
00:24:29.904 [2024-11-17 23:29:53.593332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:29.904 [2024-11-17 23:29:53.593338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:29.904 [2024-11-17 23:29:53.593346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.904 [2024-11-17 23:29:53.593354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:29.904 [2024-11-17 23:29:53.593363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.929 ms 00:24:29.904 [2024-11-17 23:29:53.593373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.904 [2024-11-17 23:29:53.596354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.904 [2024-11-17 23:29:53.596388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:29.904 [2024-11-17 23:29:53.596408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.963 ms 00:24:29.904 [2024-11-17 23:29:53.596417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.904 [2024-11-17 23:29:53.596598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.904 [2024-11-17 23:29:53.596619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:29.904 [2024-11-17 23:29:53.596628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:24:29.904 [2024-11-17 23:29:53.596636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.904 [2024-11-17 23:29:53.606540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.904 [2024-11-17 23:29:53.606585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:29.904 [2024-11-17 23:29:53.606597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.904 [2024-11-17 23:29:53.606607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.904 [2024-11-17 23:29:53.606677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.904 [2024-11-17 23:29:53.606688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:29.904 [2024-11-17 23:29:53.606699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.904 [2024-11-17 23:29:53.606707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.904 [2024-11-17 23:29:53.606771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.904 [2024-11-17 23:29:53.606782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:29.904 [2024-11-17 23:29:53.606791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.904 [2024-11-17 23:29:53.606799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.904 [2024-11-17 23:29:53.606820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.904 [2024-11-17 23:29:53.606833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:29.904 [2024-11-17 23:29:53.606845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.905 [2024-11-17 23:29:53.606854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.905 [2024-11-17 23:29:53.625613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.905 [2024-11-17 23:29:53.625671] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:29.905 [2024-11-17 23:29:53.625683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.905 [2024-11-17 23:29:53.625692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.905 [2024-11-17 23:29:53.640985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.905 [2024-11-17 23:29:53.641052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:29.905 [2024-11-17 23:29:53.641065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.905 [2024-11-17 23:29:53.641074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.905 [2024-11-17 23:29:53.641135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.905 [2024-11-17 23:29:53.641147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:29.905 [2024-11-17 23:29:53.641156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.905 [2024-11-17 23:29:53.641174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.905 [2024-11-17 23:29:53.641220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.905 [2024-11-17 23:29:53.641231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:29.905 [2024-11-17 23:29:53.641241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.905 [2024-11-17 23:29:53.641253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.905 [2024-11-17 23:29:53.641338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.905 [2024-11-17 23:29:53.641349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:29.905 [2024-11-17 23:29:53.641359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.905 [2024-11-17 23:29:53.641369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.905 [2024-11-17 23:29:53.641403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.905 [2024-11-17 23:29:53.641415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:29.905 [2024-11-17 23:29:53.641424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.905 [2024-11-17 23:29:53.641433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.905 [2024-11-17 23:29:53.641492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.905 [2024-11-17 23:29:53.641503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:29.905 [2024-11-17 23:29:53.641512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.905 [2024-11-17 23:29:53.641526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.905 [2024-11-17 23:29:53.641585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.905 [2024-11-17 23:29:53.641608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:29.905 [2024-11-17 23:29:53.641619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.905 [2024-11-17 23:29:53.641632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.905 [2024-11-17 23:29:53.641796] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL shutdown', duration = 420.534 ms, result 0 00:24:30.846 00:24:30.846 00:24:30.846 23:29:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:32.833 23:29:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:32.833 [2024-11-17 23:29:56.594790] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:24:32.833 [2024-11-17 23:29:56.594905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89951 ] 00:24:33.092 [2024-11-17 23:29:56.727113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.092 [2024-11-17 23:29:56.750725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.092 [2024-11-17 23:29:56.849131] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:33.092 [2024-11-17 23:29:56.849188] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:33.352 [2024-11-17 23:29:56.997486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.352 [2024-11-17 23:29:56.997529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:33.352 [2024-11-17 23:29:56.997540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:33.352 [2024-11-17 23:29:56.997546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.352 [2024-11-17 23:29:56.997587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.352 [2024-11-17 23:29:56.997595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:33.352 [2024-11-17 23:29:56.997604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:33.352 [2024-11-17 23:29:56.997609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.352 [2024-11-17 23:29:56.997625] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:33.352 [2024-11-17 23:29:56.997821] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:33.352 [2024-11-17 23:29:56.997833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.352 [2024-11-17 23:29:56.997839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:33.352 [2024-11-17 23:29:56.997846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:24:33.352 [2024-11-17 23:29:56.997855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.352 [2024-11-17 23:29:56.999173] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:33.352 [2024-11-17 23:29:57.001663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.352 [2024-11-17 23:29:57.001698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:33.352 [2024-11-17 23:29:57.001706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.491 ms 00:24:33.352 [2024-11-17 23:29:57.001712] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.352 [2024-11-17 23:29:57.001763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.352 [2024-11-17 23:29:57.001771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:33.352 [2024-11-17 23:29:57.001778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:33.352 [2024-11-17 23:29:57.001784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.352 [2024-11-17 23:29:57.007868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.352 [2024-11-17 23:29:57.007902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:33.352 [2024-11-17 23:29:57.007914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.039 ms 00:24:33.352 [2024-11-17 23:29:57.007924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.352 [2024-11-17 23:29:57.007989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.352 [2024-11-17 23:29:57.007997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:33.352 [2024-11-17 23:29:57.008003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:33.352 [2024-11-17 23:29:57.008009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.352 [2024-11-17 23:29:57.008042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.352 [2024-11-17 23:29:57.008049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:33.352 [2024-11-17 23:29:57.008060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:33.352 [2024-11-17 23:29:57.008065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.352 [2024-11-17 23:29:57.008085] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:33.352 [2024-11-17 23:29:57.009598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.352 [2024-11-17 23:29:57.009628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:33.352 [2024-11-17 23:29:57.009638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.517 ms 00:24:33.352 [2024-11-17 23:29:57.009646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.352 [2024-11-17 23:29:57.009669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.352 [2024-11-17 23:29:57.009680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:33.352 [2024-11-17 23:29:57.009689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:33.352 [2024-11-17 23:29:57.009695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.352 [2024-11-17 23:29:57.009712] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:33.352 [2024-11-17 23:29:57.009727] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:33.352 [2024-11-17 23:29:57.009759] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:33.352 [2024-11-17 23:29:57.009773] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:33.352 [2024-11-17 23:29:57.009859] 
upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:33.352 [2024-11-17 23:29:57.009873] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:33.352 [2024-11-17 23:29:57.009895] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:33.352 [2024-11-17 23:29:57.009906] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:33.352 [2024-11-17 23:29:57.009914] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:33.352 [2024-11-17 23:29:57.009920] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:33.352 [2024-11-17 23:29:57.009926] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:33.352 [2024-11-17 23:29:57.009933] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:33.352 [2024-11-17 23:29:57.009940] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:33.352 [2024-11-17 23:29:57.009946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.352 [2024-11-17 23:29:57.009955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:33.352 [2024-11-17 23:29:57.009961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:24:33.352 [2024-11-17 23:29:57.009966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.352 [2024-11-17 23:29:57.010029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.352 [2024-11-17 23:29:57.010044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:33.352 [2024-11-17 23:29:57.010050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:33.352 [2024-11-17 23:29:57.010055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.352 [2024-11-17 23:29:57.010128] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:33.352 [2024-11-17 23:29:57.010141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:33.352 [2024-11-17 23:29:57.010148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:33.352 [2024-11-17 23:29:57.010155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.352 [2024-11-17 23:29:57.010165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:33.352 [2024-11-17 23:29:57.010170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:33.352 [2024-11-17 23:29:57.010179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:33.352 [2024-11-17 23:29:57.010185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:33.352 [2024-11-17 23:29:57.010190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:33.352 [2024-11-17 23:29:57.010196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:33.352 [2024-11-17 23:29:57.010201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:33.353 [2024-11-17 23:29:57.010207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:33.353 [2024-11-17 23:29:57.010212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:33.353 
[2024-11-17 23:29:57.010218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:33.353 [2024-11-17 23:29:57.010224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:33.353 [2024-11-17 23:29:57.010229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.353 [2024-11-17 23:29:57.010234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:33.353 [2024-11-17 23:29:57.010240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:33.353 [2024-11-17 23:29:57.010245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.353 [2024-11-17 23:29:57.010251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:33.353 [2024-11-17 23:29:57.010257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:33.353 [2024-11-17 23:29:57.010263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.353 [2024-11-17 23:29:57.010271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:33.353 [2024-11-17 23:29:57.010278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:33.353 [2024-11-17 23:29:57.010284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.353 [2024-11-17 23:29:57.010290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:33.353 [2024-11-17 23:29:57.010296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:33.353 [2024-11-17 23:29:57.010301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.353 [2024-11-17 23:29:57.010307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:33.353 [2024-11-17 23:29:57.010313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:33.353 [2024-11-17 23:29:57.010318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.353 [2024-11-17 23:29:57.010324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:33.353 [2024-11-17 23:29:57.010330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:33.353 [2024-11-17 23:29:57.010335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:33.353 [2024-11-17 23:29:57.010341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:33.353 [2024-11-17 23:29:57.010347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:33.353 [2024-11-17 23:29:57.010353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:33.353 [2024-11-17 23:29:57.010358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:33.353 [2024-11-17 23:29:57.010366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:33.353 [2024-11-17 23:29:57.010372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.353 [2024-11-17 23:29:57.010378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:33.353 [2024-11-17 23:29:57.010383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:33.353 [2024-11-17 23:29:57.010389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.353 [2024-11-17 23:29:57.010395] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:33.353 [2024-11-17 23:29:57.010404] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region sb_mirror 00:24:33.353 [2024-11-17 23:29:57.010415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:33.353 [2024-11-17 23:29:57.010422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.353 [2024-11-17 23:29:57.010429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:33.353 [2024-11-17 23:29:57.010435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:33.353 [2024-11-17 23:29:57.010441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:33.353 [2024-11-17 23:29:57.010447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:33.353 [2024-11-17 23:29:57.010453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:33.353 [2024-11-17 23:29:57.010459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:33.353 [2024-11-17 23:29:57.010466] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:33.353 [2024-11-17 23:29:57.010475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:33.353 [2024-11-17 23:29:57.010486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:33.353 [2024-11-17 23:29:57.010492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:33.353 [2024-11-17 23:29:57.010499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:33.353 [2024-11-17 23:29:57.010505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:33.353 [2024-11-17 23:29:57.010512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:33.353 [2024-11-17 23:29:57.010518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:33.353 [2024-11-17 23:29:57.010524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:33.353 [2024-11-17 23:29:57.010530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:33.353 [2024-11-17 23:29:57.010536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:33.353 [2024-11-17 23:29:57.010543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:33.353 [2024-11-17 23:29:57.010549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:33.353 [2024-11-17 23:29:57.010559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:33.353 [2024-11-17 23:29:57.010565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:33.353 [2024-11-17 23:29:57.010571] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:33.353 [2024-11-17 23:29:57.010577] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:33.353 [2024-11-17 23:29:57.010587] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:33.353 [2024-11-17 23:29:57.010594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:33.353 [2024-11-17 23:29:57.010601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:33.353 [2024-11-17 23:29:57.010607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:33.353 [2024-11-17 23:29:57.010613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:33.353 [2024-11-17 23:29:57.010620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.353 [2024-11-17 23:29:57.010627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:33.353 [2024-11-17 23:29:57.010635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:24:33.353 [2024-11-17 23:29:57.010641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.353 [2024-11-17 23:29:57.021575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.353 [2024-11-17 23:29:57.021607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:33.353 [2024-11-17 23:29:57.021619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.900 ms 00:24:33.353 [2024-11-17 23:29:57.021632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.353 [2024-11-17 23:29:57.021706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.353 [2024-11-17 23:29:57.021713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:33.353 [2024-11-17 23:29:57.021719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:33.353 [2024-11-17 23:29:57.021725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.353 [2024-11-17 23:29:57.046129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.353 [2024-11-17 23:29:57.046219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:33.353 [2024-11-17 23:29:57.046250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.359 ms 00:24:33.353 [2024-11-17 23:29:57.046280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.353 [2024-11-17 23:29:57.046382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.353 [2024-11-17 23:29:57.046408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:33.353 [2024-11-17 23:29:57.046431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:33.353 [2024-11-17 23:29:57.046451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.353 [2024-11-17 23:29:57.047142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.353 
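
The superblock layout tables just dumped are internally consistent: within each table the regions are packed back-to-back (each blk_offs equals the previous blk_offs plus blk_sz), and sizes are counted in 4 KiB FTL blocks — a block size implied by the base table, where the type:0x9 data region spans 0x1900000 blocks and the same data_btm region is reported as 102400.00 MiB. A minimal consistency check over the nvc table, with the (type, blk_offs, blk_sz) triples copied from the log (Python sketch):

    nvc_regions = [  # (type, blk_offs, blk_sz) from "SB metadata layout - nvc" above
        (0x0, 0x0, 0x20), (0x2, 0x20, 0x5000), (0x3, 0x5020, 0x80),
        (0x4, 0x50a0, 0x80), (0xa, 0x5120, 0x800), (0xb, 0x5920, 0x800),
        (0xc, 0x6120, 0x800), (0xd, 0x6920, 0x800), (0xe, 0x7120, 0x40),
        (0xf, 0x7160, 0x40), (0x10, 0x71a0, 0x20), (0x11, 0x71c0, 0x20),
        (0x6, 0x71e0, 0x20), (0x7, 0x7200, 0x20), (0xfffffffe, 0x7220, 0x13c0e0),
    ]
    for (_, offs, sz), (_, nxt, _) in zip(nvc_regions, nvc_regions[1:]):
        assert offs + sz == nxt                       # regions abut: no gaps, no overlaps
    BLOCK = 4096                                      # bytes per FTL block (derived above)
    end = nvc_regions[-1][1] + nvc_regions[-1][2]     # 0x143300 blocks total
    print(end * BLOCK / 2**20)                        # 5171.0 -> "NV cache device capacity: 5171.00 MiB"
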
[2024-11-17 23:29:57.047206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:33.353 [2024-11-17 23:29:57.047230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:24:33.353 [2024-11-17 23:29:57.047250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.353 [2024-11-17 23:29:57.047571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.353 [2024-11-17 23:29:57.047617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:33.353 [2024-11-17 23:29:57.047649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:24:33.353 [2024-11-17 23:29:57.047678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.353 [2024-11-17 23:29:57.054376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.353 [2024-11-17 23:29:57.054411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:33.353 [2024-11-17 23:29:57.054419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.653 ms 00:24:33.353 [2024-11-17 23:29:57.054425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.353 [2024-11-17 23:29:57.057331] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:33.353 [2024-11-17 23:29:57.057360] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:33.353 [2024-11-17 23:29:57.057370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.354 [2024-11-17 23:29:57.057378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:33.354 [2024-11-17 23:29:57.057384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.878 ms 00:24:33.354 [2024-11-17 23:29:57.057390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.354 [2024-11-17 23:29:57.068712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.354 [2024-11-17 23:29:57.068744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:33.354 [2024-11-17 23:29:57.068752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.288 ms 00:24:33.354 [2024-11-17 23:29:57.068758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.354 [2024-11-17 23:29:57.070759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.354 [2024-11-17 23:29:57.070787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:33.354 [2024-11-17 23:29:57.070793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.968 ms 00:24:33.354 [2024-11-17 23:29:57.070799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.354 [2024-11-17 23:29:57.072503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.354 [2024-11-17 23:29:57.072528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:33.354 [2024-11-17 23:29:57.072535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.679 ms 00:24:33.354 [2024-11-17 23:29:57.072541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.354 [2024-11-17 23:29:57.072785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.354 [2024-11-17 23:29:57.072801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:24:33.354 [2024-11-17 23:29:57.072808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:24:33.354 [2024-11-17 23:29:57.072815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.354 [2024-11-17 23:29:57.090408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.354 [2024-11-17 23:29:57.090449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:33.354 [2024-11-17 23:29:57.090462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.579 ms 00:24:33.354 [2024-11-17 23:29:57.090469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.354 [2024-11-17 23:29:57.096646] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:33.354 [2024-11-17 23:29:57.098863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.354 [2024-11-17 23:29:57.098903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:33.354 [2024-11-17 23:29:57.098915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.361 ms 00:24:33.354 [2024-11-17 23:29:57.098922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.354 [2024-11-17 23:29:57.098986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.354 [2024-11-17 23:29:57.098995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:33.354 [2024-11-17 23:29:57.099002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:33.354 [2024-11-17 23:29:57.099008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.354 [2024-11-17 23:29:57.100323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.354 [2024-11-17 23:29:57.100355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:33.354 [2024-11-17 23:29:57.100364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms 00:24:33.354 [2024-11-17 23:29:57.100373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.354 [2024-11-17 23:29:57.100398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.354 [2024-11-17 23:29:57.100407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:33.354 [2024-11-17 23:29:57.100414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:33.354 [2024-11-17 23:29:57.100419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.354 [2024-11-17 23:29:57.100454] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:33.354 [2024-11-17 23:29:57.100463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.354 [2024-11-17 23:29:57.100470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:33.354 [2024-11-17 23:29:57.100476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:33.354 [2024-11-17 23:29:57.100483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.354 [2024-11-17 23:29:57.103668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.354 [2024-11-17 23:29:57.103697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:33.354 [2024-11-17 23:29:57.103705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.172 ms 00:24:33.354 [2024-11-17 23:29:57.103717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.354 [2024-11-17 23:29:57.103771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.354 [2024-11-17 23:29:57.103778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:33.354 [2024-11-17 23:29:57.103789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:33.354 [2024-11-17 23:29:57.103795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.354 [2024-11-17 23:29:57.104647] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 106.798 ms, result 0 00:24:34.754  [2024-11-17T23:29:59.517Z] Copying: 1072/1048576 [kB] (1072 kBps) [2024-11-17T23:30:00.459Z] Copying: 4908/1048576 [kB] (3836 kBps) [2024-11-17T23:30:01.397Z] Copying: 25/1024 [MB] (20 MBps) [2024-11-17T23:30:02.331Z] Copying: 45/1024 [MB] (20 MBps) [2024-11-17T23:30:03.265Z] Copying: 70/1024 [MB] (24 MBps) [2024-11-17T23:30:04.648Z] Copying: 87/1024 [MB] (17 MBps) [2024-11-17T23:30:05.582Z] Copying: 104/1024 [MB] (16 MBps) [2024-11-17T23:30:06.518Z] Copying: 125/1024 [MB] (21 MBps) [2024-11-17T23:30:07.454Z] Copying: 144/1024 [MB] (18 MBps) [2024-11-17T23:30:08.388Z] Copying: 162/1024 [MB] (18 MBps) [2024-11-17T23:30:09.329Z] Copying: 181/1024 [MB] (18 MBps) [2024-11-17T23:30:10.269Z] Copying: 199/1024 [MB] (18 MBps) [2024-11-17T23:30:11.661Z] Copying: 216/1024 [MB] (16 MBps) [2024-11-17T23:30:12.599Z] Copying: 232/1024 [MB] (16 MBps) [2024-11-17T23:30:13.535Z] Copying: 248/1024 [MB] (16 MBps) [2024-11-17T23:30:14.487Z] Copying: 266/1024 [MB] (17 MBps) [2024-11-17T23:30:15.425Z] Copying: 283/1024 [MB] (17 MBps) [2024-11-17T23:30:16.362Z] Copying: 300/1024 [MB] (17 MBps) [2024-11-17T23:30:17.297Z] Copying: 318/1024 [MB] (17 MBps) [2024-11-17T23:30:18.672Z] Copying: 336/1024 [MB] (17 MBps) [2024-11-17T23:30:19.623Z] Copying: 354/1024 [MB] (18 MBps) [2024-11-17T23:30:20.561Z] Copying: 372/1024 [MB] (17 MBps) [2024-11-17T23:30:21.497Z] Copying: 388/1024 [MB] (16 MBps) [2024-11-17T23:30:22.434Z] Copying: 406/1024 [MB] (17 MBps) [2024-11-17T23:30:23.371Z] Copying: 424/1024 [MB] (17 MBps) [2024-11-17T23:30:24.309Z] Copying: 441/1024 [MB] (16 MBps) [2024-11-17T23:30:25.268Z] Copying: 459/1024 [MB] (18 MBps) [2024-11-17T23:30:26.669Z] Copying: 477/1024 [MB] (18 MBps) [2024-11-17T23:30:27.604Z] Copying: 495/1024 [MB] (17 MBps) [2024-11-17T23:30:28.547Z] Copying: 513/1024 [MB] (18 MBps) [2024-11-17T23:30:29.482Z] Copying: 530/1024 [MB] (16 MBps) [2024-11-17T23:30:30.422Z] Copying: 547/1024 [MB] (17 MBps) [2024-11-17T23:30:31.358Z] Copying: 564/1024 [MB] (17 MBps) [2024-11-17T23:30:32.292Z] Copying: 581/1024 [MB] (16 MBps) [2024-11-17T23:30:33.666Z] Copying: 599/1024 [MB] (17 MBps) [2024-11-17T23:30:34.608Z] Copying: 617/1024 [MB] (18 MBps) [2024-11-17T23:30:35.552Z] Copying: 635/1024 [MB] (18 MBps) [2024-11-17T23:30:36.501Z] Copying: 653/1024 [MB] (18 MBps) [2024-11-17T23:30:37.435Z] Copying: 669/1024 [MB] (15 MBps) [2024-11-17T23:30:38.380Z] Copying: 687/1024 [MB] (17 MBps) [2024-11-17T23:30:39.319Z] Copying: 704/1024 [MB] (17 MBps) [2024-11-17T23:30:40.253Z] Copying: 722/1024 [MB] (17 MBps) [2024-11-17T23:30:41.641Z] Copying: 741/1024 [MB] (18 MBps) [2024-11-17T23:30:42.584Z] Copying: 758/1024 [MB] (17 MBps) [2024-11-17T23:30:43.522Z] Copying: 774/1024 [MB] (16 MBps) [2024-11-17T23:30:44.457Z] Copying: 791/1024 [MB] (16 MBps) 
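
The readback progress interleaved here counts toward 1024 MB because the spdk_dd invocation earlier copies --count=262144 blocks from ftl0, and the ftl0 bdev exposes the 4 KiB FTL blocks seen in the layout dump; the first two progress lines report the same 1 GiB transfer in kB (out of 1048576 kB). The arithmetic (Python):

    count = 262144               # --count passed to spdk_dd above
    block = 4096                 # bytes per FTL block (see the layout dump)
    total = count * block
    print(total // 2**20, "MB")  # 1024 -> "Copying: 1024/1024 [MB]"
    print(total // 2**10, "kB")  # 1048576 -> the kB-denominated lines at the start
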
[2024-11-17T23:30:45.391Z] Copying: 808/1024 [MB] (17 MBps) [2024-11-17T23:30:46.333Z] Copying: 826/1024 [MB] (17 MBps) [2024-11-17T23:30:47.276Z] Copying: 843/1024 [MB] (16 MBps) [2024-11-17T23:30:48.659Z] Copying: 860/1024 [MB] (17 MBps) [2024-11-17T23:30:49.604Z] Copying: 877/1024 [MB] (16 MBps) [2024-11-17T23:30:50.545Z] Copying: 893/1024 [MB] (15 MBps) [2024-11-17T23:30:51.486Z] Copying: 908/1024 [MB] (15 MBps) [2024-11-17T23:30:52.420Z] Copying: 925/1024 [MB] (16 MBps) [2024-11-17T23:30:53.360Z] Copying: 943/1024 [MB] (17 MBps) [2024-11-17T23:30:54.354Z] Copying: 961/1024 [MB] (17 MBps) [2024-11-17T23:30:55.288Z] Copying: 979/1024 [MB] (17 MBps) [2024-11-17T23:30:56.663Z] Copying: 997/1024 [MB] (17 MBps) [2024-11-17T23:30:56.922Z] Copying: 1015/1024 [MB] (18 MBps) [2024-11-17T23:30:56.922Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-17 23:30:56.789915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.101 [2024-11-17 23:30:56.789980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:33.101 [2024-11-17 23:30:56.789997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:33.101 [2024-11-17 23:30:56.790009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.101 [2024-11-17 23:30:56.790033] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:33.101 [2024-11-17 23:30:56.790739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.101 [2024-11-17 23:30:56.790772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:33.101 [2024-11-17 23:30:56.790785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:25:33.101 [2024-11-17 23:30:56.790795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.101 [2024-11-17 23:30:56.791071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.101 [2024-11-17 23:30:56.791085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:33.101 [2024-11-17 23:30:56.791096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:25:33.101 [2024-11-17 23:30:56.791106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.101 [2024-11-17 23:30:56.804267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.101 [2024-11-17 23:30:56.804302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:33.102 [2024-11-17 23:30:56.804314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.142 ms 00:25:33.102 [2024-11-17 23:30:56.804326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.102 [2024-11-17 23:30:56.810099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.102 [2024-11-17 23:30:56.810124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:33.102 [2024-11-17 23:30:56.810133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.743 ms 00:25:33.102 [2024-11-17 23:30:56.810139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.102 [2024-11-17 23:30:56.812273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.102 [2024-11-17 23:30:56.812299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:33.102 [2024-11-17 23:30:56.812307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.108 ms 00:25:33.102 [2024-11-17 23:30:56.812313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.102 [2024-11-17 23:30:56.816088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.102 [2024-11-17 23:30:56.816122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:33.102 [2024-11-17 23:30:56.816132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.752 ms 00:25:33.102 [2024-11-17 23:30:56.816138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.102 [2024-11-17 23:30:56.819555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.102 [2024-11-17 23:30:56.819581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:33.102 [2024-11-17 23:30:56.819588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.391 ms 00:25:33.102 [2024-11-17 23:30:56.819599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.102 [2024-11-17 23:30:56.822404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.102 [2024-11-17 23:30:56.822429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:33.102 [2024-11-17 23:30:56.822436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.794 ms 00:25:33.102 [2024-11-17 23:30:56.822441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.102 [2024-11-17 23:30:56.824607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.102 [2024-11-17 23:30:56.824630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:33.102 [2024-11-17 23:30:56.824638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.143 ms 00:25:33.102 [2024-11-17 23:30:56.824643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.102 [2024-11-17 23:30:56.826249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.102 [2024-11-17 23:30:56.826272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:33.102 [2024-11-17 23:30:56.826279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.583 ms 00:25:33.102 [2024-11-17 23:30:56.826284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.102 [2024-11-17 23:30:56.827928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.102 [2024-11-17 23:30:56.827952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:33.102 [2024-11-17 23:30:56.827958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.604 ms 00:25:33.102 [2024-11-17 23:30:56.827963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.102 [2024-11-17 23:30:56.827984] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:33.102 [2024-11-17 23:30:56.828001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:33.102 [2024-11-17 23:30:56.828009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:33.102 [2024-11-17 23:30:56.828016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:25:33.102 [2024-11-17 23:30:56.828028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:33.102 [2024-11-17 23:30:56.828346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828457] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:33.103 [2024-11-17 23:30:56.828606] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:33.103 [2024-11-17 23:30:56.828612] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 760ca6d2-be61-49fd-8ade-5ebf83308e21 00:25:33.103 [2024-11-17 23:30:56.828620] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:33.103 [2024-11-17 23:30:56.828631] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 162240 00:25:33.103 [2024-11-17 23:30:56.828639] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 160256 00:25:33.103 [2024-11-17 23:30:56.828645] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0124 00:25:33.103 [2024-11-17 23:30:56.828650] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:33.103 [2024-11-17 23:30:56.828656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:33.103 [2024-11-17 23:30:56.828662] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:33.103 [2024-11-17 23:30:56.828667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:33.103 [2024-11-17 23:30:56.828673] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:33.103 [2024-11-17 23:30:56.828678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.103 [2024-11-17 23:30:56.828684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:33.103 [2024-11-17 23:30:56.828691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:25:33.103 [2024-11-17 23:30:56.828696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.103 [2024-11-17 23:30:56.830418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.103 [2024-11-17 23:30:56.830438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:33.103 [2024-11-17 23:30:56.830446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.710 ms 00:25:33.103 [2024-11-17 23:30:56.830452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.103 [2024-11-17 23:30:56.830541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.103 [2024-11-17 23:30:56.830549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:33.103 [2024-11-17 23:30:56.830556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:33.103 [2024-11-17 23:30:56.830565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.103 [2024-11-17 23:30:56.836109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.103 [2024-11-17 23:30:56.836135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:33.103 [2024-11-17 23:30:56.836143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.103 [2024-11-17 23:30:56.836149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.103 [2024-11-17 23:30:56.836189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.103 [2024-11-17 23:30:56.836196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:33.103 [2024-11-17 23:30:56.836202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.103 [2024-11-17 23:30:56.836211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.103 [2024-11-17 23:30:56.836258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.103 [2024-11-17 23:30:56.836266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:33.103 [2024-11-17 23:30:56.836273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.103 [2024-11-17 23:30:56.836279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.103 [2024-11-17 23:30:56.836296] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.103 [2024-11-17 23:30:56.836303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:33.103 [2024-11-17 23:30:56.836309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.103 [2024-11-17 23:30:56.836315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.103 [2024-11-17 23:30:56.846756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.103 [2024-11-17 23:30:56.846788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:33.103 [2024-11-17 23:30:56.846801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.103 [2024-11-17 23:30:56.846808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.103 [2024-11-17 23:30:56.855208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.103 [2024-11-17 23:30:56.855238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:33.103 [2024-11-17 23:30:56.855247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.103 [2024-11-17 23:30:56.855258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.103 [2024-11-17 23:30:56.855301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.103 [2024-11-17 23:30:56.855308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:33.103 [2024-11-17 23:30:56.855315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.103 [2024-11-17 23:30:56.855322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.103 [2024-11-17 23:30:56.855343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.103 [2024-11-17 23:30:56.855350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:33.103 [2024-11-17 23:30:56.855357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.103 [2024-11-17 23:30:56.855367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.103 [2024-11-17 23:30:56.855429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.103 [2024-11-17 23:30:56.855439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:33.103 [2024-11-17 23:30:56.855446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.103 [2024-11-17 23:30:56.855452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.103 [2024-11-17 23:30:56.855475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.104 [2024-11-17 23:30:56.855482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:33.104 [2024-11-17 23:30:56.855489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.104 [2024-11-17 23:30:56.855497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.104 [2024-11-17 23:30:56.855531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.104 [2024-11-17 23:30:56.855539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:33.104 [2024-11-17 23:30:56.855546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.104 [2024-11-17 23:30:56.855552] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:33.104 [2024-11-17 23:30:56.855591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.104 [2024-11-17 23:30:56.855599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:33.104 [2024-11-17 23:30:56.855605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.104 [2024-11-17 23:30:56.855612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.104 [2024-11-17 23:30:56.855720] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 65.804 ms, result 0 00:25:33.380 00:25:33.380 00:25:33.380 23:30:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:35.923 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:35.923 23:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:35.923 [2024-11-17 23:30:59.235919] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:25:35.923 [2024-11-17 23:30:59.236018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90592 ] 00:25:35.923 [2024-11-17 23:30:59.374800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.923 [2024-11-17 23:30:59.400405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.923 [2024-11-17 23:30:59.499364] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:35.923 [2024-11-17 23:30:59.499423] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:35.923 [2024-11-17 23:30:59.651943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.923 [2024-11-17 23:30:59.651982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:35.923 [2024-11-17 23:30:59.651995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:35.923 [2024-11-17 23:30:59.652005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.923 [2024-11-17 23:30:59.652044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.923 [2024-11-17 23:30:59.652052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:35.923 [2024-11-17 23:30:59.652059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:35.923 [2024-11-17 23:30:59.652065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.923 [2024-11-17 23:30:59.652084] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:35.923 [2024-11-17 23:30:59.652267] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:35.923 [2024-11-17 23:30:59.652284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.923 [2024-11-17 23:30:59.652291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:35.924 [2024-11-17 23:30:59.652301] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:25:35.924 [2024-11-17 23:30:59.652311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.924 [2024-11-17 23:30:59.653553] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:35.924 [2024-11-17 23:30:59.655990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.924 [2024-11-17 23:30:59.656017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:35.924 [2024-11-17 23:30:59.656026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.438 ms 00:25:35.924 [2024-11-17 23:30:59.656032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.924 [2024-11-17 23:30:59.656081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.924 [2024-11-17 23:30:59.656089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:35.924 [2024-11-17 23:30:59.656095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:35.924 [2024-11-17 23:30:59.656101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.924 [2024-11-17 23:30:59.662203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.924 [2024-11-17 23:30:59.662232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:35.924 [2024-11-17 23:30:59.662242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.061 ms 00:25:35.924 [2024-11-17 23:30:59.662251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.924 [2024-11-17 23:30:59.662320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.924 [2024-11-17 23:30:59.662329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:35.924 [2024-11-17 23:30:59.662337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:35.924 [2024-11-17 23:30:59.662344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.924 [2024-11-17 23:30:59.662381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.924 [2024-11-17 23:30:59.662390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:35.924 [2024-11-17 23:30:59.662400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:35.924 [2024-11-17 23:30:59.662406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.924 [2024-11-17 23:30:59.662426] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:35.924 [2024-11-17 23:30:59.663939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.924 [2024-11-17 23:30:59.663963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.924 [2024-11-17 23:30:59.663971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.516 ms 00:25:35.924 [2024-11-17 23:30:59.663978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.924 [2024-11-17 23:30:59.664002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.924 [2024-11-17 23:30:59.664010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:35.924 [2024-11-17 23:30:59.664016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:35.924 [2024-11-17 23:30:59.664026] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:35.924 [2024-11-17 23:30:59.664044] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:35.924 [2024-11-17 23:30:59.664060] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:35.924 [2024-11-17 23:30:59.664093] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:35.924 [2024-11-17 23:30:59.664106] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:35.924 [2024-11-17 23:30:59.664192] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:35.924 [2024-11-17 23:30:59.664206] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:35.924 [2024-11-17 23:30:59.664215] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:35.924 [2024-11-17 23:30:59.664226] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:35.924 [2024-11-17 23:30:59.664234] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:35.924 [2024-11-17 23:30:59.664243] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:35.924 [2024-11-17 23:30:59.664249] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:35.924 [2024-11-17 23:30:59.664256] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:35.924 [2024-11-17 23:30:59.664262] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:35.924 [2024-11-17 23:30:59.664268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.924 [2024-11-17 23:30:59.664274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:35.924 [2024-11-17 23:30:59.664282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:25:35.924 [2024-11-17 23:30:59.664291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.924 [2024-11-17 23:30:59.664354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.924 [2024-11-17 23:30:59.664363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:35.924 [2024-11-17 23:30:59.664372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:35.924 [2024-11-17 23:30:59.664377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.924 [2024-11-17 23:30:59.664453] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:35.924 [2024-11-17 23:30:59.664461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:35.924 [2024-11-17 23:30:59.664468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:35.924 [2024-11-17 23:30:59.664477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.924 [2024-11-17 23:30:59.664491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:35.924 [2024-11-17 23:30:59.664497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:35.924 [2024-11-17 23:30:59.664503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:35.924 [2024-11-17 
23:30:59.664508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:35.924 [2024-11-17 23:30:59.664514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:35.924 [2024-11-17 23:30:59.664518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:35.924 [2024-11-17 23:30:59.664526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:35.924 [2024-11-17 23:30:59.664531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:35.924 [2024-11-17 23:30:59.664535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:35.924 [2024-11-17 23:30:59.664541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:35.924 [2024-11-17 23:30:59.664546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:35.924 [2024-11-17 23:30:59.664551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.924 [2024-11-17 23:30:59.664556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:35.924 [2024-11-17 23:30:59.664563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:35.924 [2024-11-17 23:30:59.664568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.924 [2024-11-17 23:30:59.664573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:35.924 [2024-11-17 23:30:59.664579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:35.924 [2024-11-17 23:30:59.664584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.924 [2024-11-17 23:30:59.664589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:35.924 [2024-11-17 23:30:59.664595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:35.924 [2024-11-17 23:30:59.664602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.924 [2024-11-17 23:30:59.664608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:35.924 [2024-11-17 23:30:59.664619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:35.925 [2024-11-17 23:30:59.664626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.925 [2024-11-17 23:30:59.664632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:35.925 [2024-11-17 23:30:59.664638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:35.925 [2024-11-17 23:30:59.664644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.925 [2024-11-17 23:30:59.664650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:35.925 [2024-11-17 23:30:59.664656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:35.925 [2024-11-17 23:30:59.664662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:35.925 [2024-11-17 23:30:59.664667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:35.925 [2024-11-17 23:30:59.664673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:35.925 [2024-11-17 23:30:59.664679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:35.925 [2024-11-17 23:30:59.664685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:35.925 [2024-11-17 23:30:59.664691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 
113.62 MiB 00:25:35.925 [2024-11-17 23:30:59.664697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.925 [2024-11-17 23:30:59.664703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:35.925 [2024-11-17 23:30:59.664709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:35.925 [2024-11-17 23:30:59.664716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.925 [2024-11-17 23:30:59.664722] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:35.925 [2024-11-17 23:30:59.664731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:35.925 [2024-11-17 23:30:59.664740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:35.925 [2024-11-17 23:30:59.664746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.925 [2024-11-17 23:30:59.664753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:35.925 [2024-11-17 23:30:59.664759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:35.925 [2024-11-17 23:30:59.664766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:35.925 [2024-11-17 23:30:59.664773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:35.925 [2024-11-17 23:30:59.664780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:35.925 [2024-11-17 23:30:59.664786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:35.925 [2024-11-17 23:30:59.664793] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:35.925 [2024-11-17 23:30:59.664802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:35.925 [2024-11-17 23:30:59.664809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:35.925 [2024-11-17 23:30:59.664816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:35.925 [2024-11-17 23:30:59.664823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:35.925 [2024-11-17 23:30:59.664830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:35.925 [2024-11-17 23:30:59.664836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:35.925 [2024-11-17 23:30:59.664842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:35.925 [2024-11-17 23:30:59.664848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:35.925 [2024-11-17 23:30:59.664856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:35.925 [2024-11-17 23:30:59.664862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:35.925 [2024-11-17 23:30:59.664868] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:35.925 [2024-11-17 23:30:59.664875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:35.925 [2024-11-17 23:30:59.664912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:35.925 [2024-11-17 23:30:59.664918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:35.925 [2024-11-17 23:30:59.664925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:35.925 [2024-11-17 23:30:59.664932] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:35.925 [2024-11-17 23:30:59.664939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:35.925 [2024-11-17 23:30:59.664945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:35.925 [2024-11-17 23:30:59.664952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:35.925 [2024-11-17 23:30:59.664958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:35.925 [2024-11-17 23:30:59.664967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:35.925 [2024-11-17 23:30:59.664973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.925 [2024-11-17 23:30:59.664981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:35.925 [2024-11-17 23:30:59.664987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:25:35.925 [2024-11-17 23:30:59.664993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.925 [2024-11-17 23:30:59.676077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.925 [2024-11-17 23:30:59.676103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.925 [2024-11-17 23:30:59.676112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.045 ms 00:25:35.925 [2024-11-17 23:30:59.676118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.925 [2024-11-17 23:30:59.676182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.925 [2024-11-17 23:30:59.676188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:35.925 [2024-11-17 23:30:59.676194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:25:35.925 [2024-11-17 23:30:59.676200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.925 [2024-11-17 23:30:59.693221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.925 [2024-11-17 23:30:59.693254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:35.925 [2024-11-17 23:30:59.693268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.985 ms 
00:25:35.925 [2024-11-17 23:30:59.693275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.925 [2024-11-17 23:30:59.693307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.925 [2024-11-17 23:30:59.693315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:35.925 [2024-11-17 23:30:59.693325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:35.925 [2024-11-17 23:30:59.693331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.925 [2024-11-17 23:30:59.693739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.925 [2024-11-17 23:30:59.693766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:35.925 [2024-11-17 23:30:59.693774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:25:35.925 [2024-11-17 23:30:59.693780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.925 [2024-11-17 23:30:59.693905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.925 [2024-11-17 23:30:59.693926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:35.925 [2024-11-17 23:30:59.693934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:25:35.925 [2024-11-17 23:30:59.693942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.925 [2024-11-17 23:30:59.700377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.925 [2024-11-17 23:30:59.700418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:35.925 [2024-11-17 23:30:59.700429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.416 ms 00:25:35.925 [2024-11-17 23:30:59.700437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.925 [2024-11-17 23:30:59.703671] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:35.925 [2024-11-17 23:30:59.703706] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:35.925 [2024-11-17 23:30:59.703724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.925 [2024-11-17 23:30:59.703734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:35.925 [2024-11-17 23:30:59.703743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.186 ms 00:25:35.925 [2024-11-17 23:30:59.703752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.925 [2024-11-17 23:30:59.716518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.925 [2024-11-17 23:30:59.716551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:35.925 [2024-11-17 23:30:59.716563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.727 ms 00:25:35.925 [2024-11-17 23:30:59.716569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.925 [2024-11-17 23:30:59.718667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.925 [2024-11-17 23:30:59.718692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:35.925 [2024-11-17 23:30:59.718699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.065 ms 00:25:35.925 [2024-11-17 23:30:59.718704] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.925 [2024-11-17 23:30:59.720412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.925 [2024-11-17 23:30:59.720434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:35.925 [2024-11-17 23:30:59.720441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.682 ms 00:25:35.925 [2024-11-17 23:30:59.720452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.925 [2024-11-17 23:30:59.720710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.925 [2024-11-17 23:30:59.720720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:35.926 [2024-11-17 23:30:59.720727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:25:35.926 [2024-11-17 23:30:59.720733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.926 [2024-11-17 23:30:59.739029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.926 [2024-11-17 23:30:59.739074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:35.926 [2024-11-17 23:30:59.739085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.281 ms 00:25:35.926 [2024-11-17 23:30:59.739091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.184 [2024-11-17 23:30:59.745057] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:36.184 [2024-11-17 23:30:59.747316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.184 [2024-11-17 23:30:59.747341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:36.184 [2024-11-17 23:30:59.747358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.172 ms 00:25:36.184 [2024-11-17 23:30:59.747365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.184 [2024-11-17 23:30:59.747411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.184 [2024-11-17 23:30:59.747420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:36.184 [2024-11-17 23:30:59.747427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:36.184 [2024-11-17 23:30:59.747434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.184 [2024-11-17 23:30:59.748106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.184 [2024-11-17 23:30:59.748132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:36.184 [2024-11-17 23:30:59.748142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:25:36.184 [2024-11-17 23:30:59.748149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.184 [2024-11-17 23:30:59.748171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.184 [2024-11-17 23:30:59.748177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:36.184 [2024-11-17 23:30:59.748183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:36.184 [2024-11-17 23:30:59.748189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.184 [2024-11-17 23:30:59.748221] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:36.184 [2024-11-17 23:30:59.748230] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:25:36.184 [2024-11-17 23:30:59.748236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:36.184 [2024-11-17 23:30:59.748242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:36.184 [2024-11-17 23:30:59.748252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.184 [2024-11-17 23:30:59.751664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.184 [2024-11-17 23:30:59.751693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:36.184 [2024-11-17 23:30:59.751701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.395 ms 00:25:36.184 [2024-11-17 23:30:59.751708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.184 [2024-11-17 23:30:59.751766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.184 [2024-11-17 23:30:59.751777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:36.184 [2024-11-17 23:30:59.751785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:36.184 [2024-11-17 23:30:59.751791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.184 [2024-11-17 23:30:59.752754] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 100.353 ms, result 0 00:25:37.127  [2024-11-17T23:31:01.892Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-17T23:31:03.278Z] Copying: 35/1024 [MB] (15 MBps) [2024-11-17T23:31:04.218Z] Copying: 51/1024 [MB] (16 MBps) [2024-11-17T23:31:05.168Z] Copying: 66/1024 [MB] (14 MBps) [2024-11-17T23:31:06.112Z] Copying: 88/1024 [MB] (22 MBps) [2024-11-17T23:31:07.055Z] Copying: 103/1024 [MB] (14 MBps) [2024-11-17T23:31:07.994Z] Copying: 124/1024 [MB] (20 MBps) [2024-11-17T23:31:09.060Z] Copying: 138/1024 [MB] (14 MBps) [2024-11-17T23:31:09.994Z] Copying: 152/1024 [MB] (13 MBps) [2024-11-17T23:31:10.934Z] Copying: 168/1024 [MB] (16 MBps) [2024-11-17T23:31:12.310Z] Copying: 180/1024 [MB] (12 MBps) [2024-11-17T23:31:12.896Z] Copying: 192/1024 [MB] (11 MBps) [2024-11-17T23:31:14.277Z] Copying: 204/1024 [MB] (11 MBps) [2024-11-17T23:31:15.213Z] Copying: 215/1024 [MB] (11 MBps) [2024-11-17T23:31:16.152Z] Copying: 227/1024 [MB] (11 MBps) [2024-11-17T23:31:17.094Z] Copying: 241/1024 [MB] (14 MBps) [2024-11-17T23:31:18.035Z] Copying: 255/1024 [MB] (14 MBps) [2024-11-17T23:31:18.970Z] Copying: 270/1024 [MB] (14 MBps) [2024-11-17T23:31:19.905Z] Copying: 281/1024 [MB] (11 MBps) [2024-11-17T23:31:21.290Z] Copying: 292/1024 [MB] (10 MBps) [2024-11-17T23:31:22.227Z] Copying: 302/1024 [MB] (10 MBps) [2024-11-17T23:31:23.164Z] Copying: 312/1024 [MB] (10 MBps) [2024-11-17T23:31:24.100Z] Copying: 324/1024 [MB] (11 MBps) [2024-11-17T23:31:25.034Z] Copying: 335/1024 [MB] (10 MBps) [2024-11-17T23:31:25.965Z] Copying: 347/1024 [MB] (12 MBps) [2024-11-17T23:31:26.900Z] Copying: 359/1024 [MB] (12 MBps) [2024-11-17T23:31:28.279Z] Copying: 371/1024 [MB] (11 MBps) [2024-11-17T23:31:29.213Z] Copying: 381/1024 [MB] (10 MBps) [2024-11-17T23:31:30.151Z] Copying: 393/1024 [MB] (12 MBps) [2024-11-17T23:31:31.089Z] Copying: 405/1024 [MB] (11 MBps) [2024-11-17T23:31:32.024Z] Copying: 417/1024 [MB] (11 MBps) [2024-11-17T23:31:32.977Z] Copying: 429/1024 [MB] (12 MBps) [2024-11-17T23:31:33.911Z] Copying: 440/1024 [MB] (10 MBps) [2024-11-17T23:31:35.286Z] Copying: 451/1024 [MB] (11 MBps) [2024-11-17T23:31:36.218Z] Copying: 463/1024 
[MB] (11 MBps) [2024-11-17T23:31:37.151Z] Copying: 475/1024 [MB] (11 MBps) [2024-11-17T23:31:38.422Z] Copying: 487/1024 [MB] (12 MBps) [2024-11-17T23:31:38.988Z] Copying: 498/1024 [MB] (11 MBps) [2024-11-17T23:31:39.919Z] Copying: 509/1024 [MB] (10 MBps) [2024-11-17T23:31:41.300Z] Copying: 521/1024 [MB] (12 MBps) [2024-11-17T23:31:42.235Z] Copying: 532/1024 [MB] (11 MBps) [2024-11-17T23:31:43.168Z] Copying: 544/1024 [MB] (11 MBps) [2024-11-17T23:31:44.102Z] Copying: 555/1024 [MB] (11 MBps) [2024-11-17T23:31:45.037Z] Copying: 567/1024 [MB] (11 MBps) [2024-11-17T23:31:45.976Z] Copying: 579/1024 [MB] (11 MBps) [2024-11-17T23:31:46.911Z] Copying: 590/1024 [MB] (10 MBps) [2024-11-17T23:31:48.289Z] Copying: 602/1024 [MB] (11 MBps) [2024-11-17T23:31:49.226Z] Copying: 614/1024 [MB] (11 MBps) [2024-11-17T23:31:50.157Z] Copying: 625/1024 [MB] (11 MBps) [2024-11-17T23:31:51.091Z] Copying: 637/1024 [MB] (11 MBps) [2024-11-17T23:31:52.084Z] Copying: 648/1024 [MB] (11 MBps) [2024-11-17T23:31:53.080Z] Copying: 660/1024 [MB] (11 MBps) [2024-11-17T23:31:54.021Z] Copying: 671/1024 [MB] (11 MBps) [2024-11-17T23:31:54.963Z] Copying: 682/1024 [MB] (11 MBps) [2024-11-17T23:31:55.903Z] Copying: 694/1024 [MB] (11 MBps) [2024-11-17T23:31:57.279Z] Copying: 706/1024 [MB] (11 MBps) [2024-11-17T23:31:58.214Z] Copying: 717/1024 [MB] (11 MBps) [2024-11-17T23:31:59.149Z] Copying: 729/1024 [MB] (11 MBps) [2024-11-17T23:32:00.087Z] Copying: 741/1024 [MB] (11 MBps) [2024-11-17T23:32:01.022Z] Copying: 752/1024 [MB] (11 MBps) [2024-11-17T23:32:01.957Z] Copying: 763/1024 [MB] (11 MBps) [2024-11-17T23:32:02.891Z] Copying: 774/1024 [MB] (11 MBps) [2024-11-17T23:32:04.266Z] Copying: 786/1024 [MB] (11 MBps) [2024-11-17T23:32:05.204Z] Copying: 798/1024 [MB] (11 MBps) [2024-11-17T23:32:06.139Z] Copying: 809/1024 [MB] (10 MBps) [2024-11-17T23:32:07.089Z] Copying: 821/1024 [MB] (11 MBps) [2024-11-17T23:32:08.025Z] Copying: 833/1024 [MB] (11 MBps) [2024-11-17T23:32:09.094Z] Copying: 844/1024 [MB] (11 MBps) [2024-11-17T23:32:10.042Z] Copying: 856/1024 [MB] (11 MBps) [2024-11-17T23:32:10.979Z] Copying: 867/1024 [MB] (11 MBps) [2024-11-17T23:32:11.927Z] Copying: 879/1024 [MB] (11 MBps) [2024-11-17T23:32:13.316Z] Copying: 889/1024 [MB] (10 MBps) [2024-11-17T23:32:14.250Z] Copying: 902/1024 [MB] (12 MBps) [2024-11-17T23:32:15.184Z] Copying: 914/1024 [MB] (12 MBps) [2024-11-17T23:32:16.121Z] Copying: 926/1024 [MB] (12 MBps) [2024-11-17T23:32:17.057Z] Copying: 947/1024 [MB] (20 MBps) [2024-11-17T23:32:17.993Z] Copying: 958/1024 [MB] (11 MBps) [2024-11-17T23:32:18.989Z] Copying: 970/1024 [MB] (11 MBps) [2024-11-17T23:32:19.928Z] Copying: 982/1024 [MB] (11 MBps) [2024-11-17T23:32:21.317Z] Copying: 993/1024 [MB] (11 MBps) [2024-11-17T23:32:21.889Z] Copying: 1004/1024 [MB] (10 MBps) [2024-11-17T23:32:22.149Z] Copying: 1016/1024 [MB] (12 MBps) [2024-11-17T23:32:22.412Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-17 23:32:22.314839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.591 [2024-11-17 23:32:22.314931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:58.591 [2024-11-17 23:32:22.314946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:58.591 [2024-11-17 23:32:22.314955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.591 [2024-11-17 23:32:22.314988] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:58.591 [2024-11-17 23:32:22.315711] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.591 [2024-11-17 23:32:22.315745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:58.591 [2024-11-17 23:32:22.315764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:26:58.591 [2024-11-17 23:32:22.315773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.591 [2024-11-17 23:32:22.316016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.591 [2024-11-17 23:32:22.316028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:58.591 [2024-11-17 23:32:22.316038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:26:58.591 [2024-11-17 23:32:22.316046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.591 [2024-11-17 23:32:22.319500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.592 [2024-11-17 23:32:22.319523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:58.592 [2024-11-17 23:32:22.319533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.435 ms 00:26:58.592 [2024-11-17 23:32:22.319541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.592 [2024-11-17 23:32:22.326973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.592 [2024-11-17 23:32:22.327014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:58.592 [2024-11-17 23:32:22.327025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.408 ms 00:26:58.592 [2024-11-17 23:32:22.327033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.592 [2024-11-17 23:32:22.329509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.592 [2024-11-17 23:32:22.329683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:58.592 [2024-11-17 23:32:22.329701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.408 ms 00:26:58.592 [2024-11-17 23:32:22.329708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.592 [2024-11-17 23:32:22.333576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.592 [2024-11-17 23:32:22.333625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:58.592 [2024-11-17 23:32:22.333638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.766 ms 00:26:58.592 [2024-11-17 23:32:22.333646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.592 [2024-11-17 23:32:22.336058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.592 [2024-11-17 23:32:22.336230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:58.592 [2024-11-17 23:32:22.336248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.360 ms 00:26:58.592 [2024-11-17 23:32:22.336266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.592 [2024-11-17 23:32:22.338459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.592 [2024-11-17 23:32:22.338496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:58.592 [2024-11-17 23:32:22.338505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.169 ms 00:26:58.592 [2024-11-17 23:32:22.338512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
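Every FTL management step in this trace is logged by mngt/ftl_mngt.c as the same four-record group: an Action (or Rollback) marker, then name, duration, and status. That regularity makes step timings easy to pull back out of a console log. Below is a minimal sketch, not part of the SPDK test scripts: it assumes the raw console format with one NOTICE record per line (the wrapped lines above would need splitting first) and a capture saved as console.log; the output layout is illustrative.

# illustrative helper, not from the SPDK repo: rank trace_step durations;
# assumes one NOTICE record per line, as on the raw build console
awk '/trace_step: \*NOTICE\*: \[FTL\]/ {
       # "name: <step>" record: remember the current step name
       if (match($0, /name: /)) { name = substr($0, RSTART + RLENGTH) }
       # "duration: <n> ms" record: keep the number, drop the unit
       else if (match($0, /duration: /)) { dur = substr($0, RSTART + RLENGTH); sub(/ ms.*/, "", dur) }
       # "status: <n>" record closes the step: emit one row
       else if (/status: /) { printf "%10s ms  %s\n", dur, name }
     }' console.log | sort -rn | head

Run against the first shutdown trace above, this would put Persist L2P on top at 13.142 ms of the 65.804 ms total. The statistics dump is just as mechanical to sanity-check: WAF is total writes divided by user writes, and 162240 / 160256 ≈ 1.0124 matches the reported value.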
00:26:58.592 [2024-11-17 23:32:22.340331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.592 [2024-11-17 23:32:22.340369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:58.592 [2024-11-17 23:32:22.340379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.783 ms 00:26:58.592 [2024-11-17 23:32:22.340385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.592 [2024-11-17 23:32:22.341799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.592 [2024-11-17 23:32:22.341840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:58.592 [2024-11-17 23:32:22.341849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.379 ms 00:26:58.592 [2024-11-17 23:32:22.341856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.592 [2024-11-17 23:32:22.343272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.592 [2024-11-17 23:32:22.343414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:58.592 [2024-11-17 23:32:22.343431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.335 ms 00:26:58.592 [2024-11-17 23:32:22.343439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.592 [2024-11-17 23:32:22.343470] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:58.592 [2024-11-17 23:32:22.343484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:58.592 [2024-11-17 23:32:22.343495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open [... Band 3 through Band 100 condensed: 98 identical entries, each 0 / 261120 wr_cnt: 0 state: free ...] 00:26:58.593 [2024-11-17 23:32:22.344289] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:58.593 [2024-11-17 23:32:22.344298] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 760ca6d2-be61-49fd-8ade-5ebf83308e21 00:26:58.593 [2024-11-17 23:32:22.344306] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:58.593 [2024-11-17 23:32:22.344313] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:58.593 [2024-11-17 23:32:22.344320] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:58.593 [2024-11-17 23:32:22.344328] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:58.593 [2024-11-17 23:32:22.344336] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:58.593 [2024-11-17 23:32:22.344343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:58.593 [2024-11-17 23:32:22.344351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:58.593 [2024-11-17 23:32:22.344357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:58.593 [2024-11-17 23:32:22.344374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:58.593 [2024-11-17 23:32:22.344382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.593 [2024-11-17 23:32:22.344389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:58.593 [2024-11-17 23:32:22.344404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:26:58.593 [2024-11-17 23:32:22.344411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.593 [2024-11-17 23:32:22.346370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.593 [2024-11-17 23:32:22.346396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:58.593 [2024-11-17 23:32:22.346406]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.941 ms 00:26:58.593 [2024-11-17 23:32:22.346414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.593 [2024-11-17 23:32:22.346519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.593 [2024-11-17 23:32:22.346528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:58.593 [2024-11-17 23:32:22.346537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:26:58.593 [2024-11-17 23:32:22.346544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.593 [2024-11-17 23:32:22.352862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.593 [2024-11-17 23:32:22.353008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:58.593 [2024-11-17 23:32:22.353064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.593 [2024-11-17 23:32:22.353087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.593 [2024-11-17 23:32:22.353159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.593 [2024-11-17 23:32:22.353180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:58.593 [2024-11-17 23:32:22.353200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.593 [2024-11-17 23:32:22.353219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.593 [2024-11-17 23:32:22.353286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.593 [2024-11-17 23:32:22.353347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:58.593 [2024-11-17 23:32:22.353367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.593 [2024-11-17 23:32:22.353385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.593 [2024-11-17 23:32:22.353415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.593 [2024-11-17 23:32:22.353435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:58.593 [2024-11-17 23:32:22.353454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.593 [2024-11-17 23:32:22.353511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.593 [2024-11-17 23:32:22.364609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.593 [2024-11-17 23:32:22.364764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:58.593 [2024-11-17 23:32:22.364791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.593 [2024-11-17 23:32:22.364799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.593 [2024-11-17 23:32:22.373385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.593 [2024-11-17 23:32:22.373428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:58.593 [2024-11-17 23:32:22.373439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.593 [2024-11-17 23:32:22.373447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.593 [2024-11-17 23:32:22.373489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.593 [2024-11-17 23:32:22.373498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize core IO channel 00:26:58.593 [2024-11-17 23:32:22.373506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.593 [2024-11-17 23:32:22.373514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.593 [2024-11-17 23:32:22.373539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.593 [2024-11-17 23:32:22.373548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:58.593 [2024-11-17 23:32:22.373563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.593 [2024-11-17 23:32:22.373570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.593 [2024-11-17 23:32:22.373641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.593 [2024-11-17 23:32:22.373651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:58.593 [2024-11-17 23:32:22.373659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.593 [2024-11-17 23:32:22.373667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.593 [2024-11-17 23:32:22.373699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.594 [2024-11-17 23:32:22.373708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:58.594 [2024-11-17 23:32:22.373719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.594 [2024-11-17 23:32:22.373726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.594 [2024-11-17 23:32:22.373772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.594 [2024-11-17 23:32:22.373781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:58.594 [2024-11-17 23:32:22.373789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.594 [2024-11-17 23:32:22.373796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.594 [2024-11-17 23:32:22.373835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.594 [2024-11-17 23:32:22.373844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:58.594 [2024-11-17 23:32:22.373855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.594 [2024-11-17 23:32:22.373863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.594 [2024-11-17 23:32:22.374190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.311 ms, result 0 00:26:58.855 00:26:58.855 00:26:58.855 23:32:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:01.395 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:01.395 23:32:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:01.395 23:32:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:01.395 23:32:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:01.395 23:32:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:01.395 23:32:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:01.395 23:32:24 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:01.395 23:32:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:01.395 Process with pid 88333 is not found 00:27:01.395 23:32:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 88333 00:27:01.395 23:32:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 88333 ']' 00:27:01.395 23:32:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 88333 00:27:01.395 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88333) - No such process 00:27:01.395 23:32:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 88333 is not found' 00:27:01.395 23:32:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:01.653 Remove shared memory files 00:27:01.653 23:32:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:01.653 23:32:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:01.653 23:32:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:01.653 23:32:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:01.653 23:32:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:01.653 23:32:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:01.653 23:32:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:01.653 ************************************ 00:27:01.653 END TEST ftl_dirty_shutdown 00:27:01.653 ************************************ 00:27:01.653 00:27:01.653 real 4m58.207s 00:27:01.653 user 5m14.129s 00:27:01.653 sys 0m22.532s 00:27:01.653 23:32:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:01.653 23:32:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:01.653 23:32:25 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:01.653 23:32:25 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:01.653 23:32:25 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:01.653 23:32:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:01.653 ************************************ 00:27:01.653 START TEST ftl_upgrade_shutdown 00:27:01.653 ************************************ 00:27:01.653 23:32:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:01.653 * Looking for test storage... 
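The killprocess failure in the dirty-shutdown cleanup above is expected rather than a bug: by the time restore_kill runs, the target with pid 88333 has already exited, so the kill -0 probe fails and the helper only prints a notice instead of erroring out. A minimal sketch of that guard pattern, paraphrasing the autotest_common.sh helper rather than reproducing its verbatim body:

    # Probe with signal 0 first; if the process is already gone,
    # report it and keep the cleanup path green instead of failing.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid"
            wait "$pid" 2>/dev/null
        else
            echo "Process with pid $pid is not found"
        fi
    }

This is why the log shows both the "kill: (88333) - No such process" shell error from the probe and the "Process with pid 88333 is not found" notice, yet the test still ends with result 0.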
00:27:01.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:01.653 23:32:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:01.653 23:32:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:27:01.653 23:32:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:01.912 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:01.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.913 --rc genhtml_branch_coverage=1 00:27:01.913 --rc genhtml_function_coverage=1 00:27:01.913 --rc genhtml_legend=1 00:27:01.913 --rc geninfo_all_blocks=1 00:27:01.913 --rc geninfo_unexecuted_blocks=1 00:27:01.913 00:27:01.913 ' 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:01.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.913 --rc genhtml_branch_coverage=1 00:27:01.913 --rc genhtml_function_coverage=1 00:27:01.913 --rc genhtml_legend=1 00:27:01.913 --rc geninfo_all_blocks=1 00:27:01.913 --rc geninfo_unexecuted_blocks=1 00:27:01.913 00:27:01.913 ' 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:01.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.913 --rc genhtml_branch_coverage=1 00:27:01.913 --rc genhtml_function_coverage=1 00:27:01.913 --rc genhtml_legend=1 00:27:01.913 --rc geninfo_all_blocks=1 00:27:01.913 --rc geninfo_unexecuted_blocks=1 00:27:01.913 00:27:01.913 ' 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:01.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.913 --rc genhtml_branch_coverage=1 00:27:01.913 --rc genhtml_function_coverage=1 00:27:01.913 --rc genhtml_legend=1 00:27:01.913 --rc geninfo_all_blocks=1 00:27:01.913 --rc geninfo_unexecuted_blocks=1 00:27:01.913 00:27:01.913 ' 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:01.913 23:32:25 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91545 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91545 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91545 ']' 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:01.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:01.913 23:32:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:01.913 [2024-11-17 23:32:25.617156] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
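The target bring-up traced here follows the standard autotest pattern: launch spdk_tgt pinned to core 0, then block in waitforlisten until the RPC socket answers. A rough sketch using the variables exported above ($spdk_tgt_bin and $rpc_py are set earlier in this trace); the real waitforlisten in autotest_common.sh does more bookkeeping, such as bailing out if the pid dies:

    # Start the SPDK target on core 0 and remember its pid.
    "$spdk_tgt_bin" --cpumask '[0]' &
    spdk_tgt_pid=$!

    # Poll the default RPC socket until the target responds;
    # rpc_get_methods is a standard SPDK RPC that any live target serves.
    until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

Once the socket answers (pid 91545 here), the script can start issuing the bdev_nvme_attach_controller and lvol RPCs that follow.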
00:27:01.913 [2024-11-17 23:32:25.617531] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91545 ] 00:27:02.173 [2024-11-17 23:32:25.761779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.173 [2024-11-17 23:32:25.791223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:02.743 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:03.004 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:03.004 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:03.004 23:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:03.004 23:32:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:27:03.004 23:32:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:03.004 23:32:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:03.004 23:32:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:27:03.004 23:32:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:03.265 23:32:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:03.265 { 00:27:03.265 "name": "basen1", 00:27:03.265 "aliases": [ 00:27:03.265 "a0ba873c-8021-46e8-a897-602eabf20895" 00:27:03.265 ], 00:27:03.265 "product_name": "NVMe disk", 00:27:03.265 "block_size": 4096, 00:27:03.265 "num_blocks": 1310720, 00:27:03.265 "uuid": "a0ba873c-8021-46e8-a897-602eabf20895", 00:27:03.265 "numa_id": -1, 00:27:03.265 "assigned_rate_limits": { 00:27:03.265 "rw_ios_per_sec": 0, 00:27:03.265 "rw_mbytes_per_sec": 0, 00:27:03.265 "r_mbytes_per_sec": 0, 00:27:03.265 "w_mbytes_per_sec": 0 00:27:03.265 }, 00:27:03.265 "claimed": true, 00:27:03.265 "claim_type": "read_many_write_one", 00:27:03.265 "zoned": false, 00:27:03.265 "supported_io_types": { 00:27:03.265 "read": true, 00:27:03.265 "write": true, 00:27:03.265 "unmap": true, 00:27:03.265 "flush": true, 00:27:03.265 "reset": true, 00:27:03.265 "nvme_admin": true, 00:27:03.265 "nvme_io": true, 00:27:03.265 "nvme_io_md": false, 00:27:03.265 "write_zeroes": true, 00:27:03.265 "zcopy": false, 00:27:03.265 "get_zone_info": false, 00:27:03.265 "zone_management": false, 00:27:03.265 "zone_append": false, 00:27:03.265 "compare": true, 00:27:03.265 "compare_and_write": false, 00:27:03.265 "abort": true, 00:27:03.265 "seek_hole": false, 00:27:03.265 "seek_data": false, 00:27:03.265 "copy": true, 00:27:03.265 "nvme_iov_md": false 00:27:03.265 }, 00:27:03.265 "driver_specific": { 00:27:03.265 "nvme": [ 00:27:03.265 { 00:27:03.265 "pci_address": "0000:00:11.0", 00:27:03.265 "trid": { 00:27:03.265 "trtype": "PCIe", 00:27:03.265 "traddr": "0000:00:11.0" 00:27:03.265 }, 00:27:03.265 "ctrlr_data": { 00:27:03.265 "cntlid": 0, 00:27:03.265 "vendor_id": "0x1b36", 00:27:03.265 "model_number": "QEMU NVMe Ctrl", 00:27:03.265 "serial_number": "12341", 00:27:03.265 "firmware_revision": "8.0.0", 00:27:03.265 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:03.265 "oacs": { 00:27:03.265 "security": 0, 00:27:03.265 "format": 1, 00:27:03.265 "firmware": 0, 00:27:03.265 "ns_manage": 1 00:27:03.265 }, 00:27:03.265 "multi_ctrlr": false, 00:27:03.265 "ana_reporting": false 00:27:03.265 }, 00:27:03.265 "vs": { 00:27:03.265 "nvme_version": "1.4" 00:27:03.265 }, 00:27:03.265 "ns_data": { 00:27:03.265 "id": 1, 00:27:03.265 "can_share": false 00:27:03.265 } 00:27:03.265 } 00:27:03.265 ], 00:27:03.265 "mp_policy": "active_passive" 00:27:03.265 } 00:27:03.265 } 00:27:03.265 ]' 00:27:03.265 23:32:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:03.265 23:32:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:03.265 23:32:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:03.265 23:32:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:03.265 23:32:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:03.265 23:32:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:03.265 23:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:03.265 23:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:03.265 23:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:03.265 23:32:27 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:03.265 23:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:03.524 23:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=931d8d75-da7f-4eac-a085-bf93a7345cc9 00:27:03.524 23:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:03.524 23:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 931d8d75-da7f-4eac-a085-bf93a7345cc9 00:27:03.783 23:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:04.042 23:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=f9f58f5e-ee97-4ed2-9917-e676ca3cb10a 00:27:04.042 23:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u f9f58f5e-ee97-4ed2-9917-e676ca3cb10a 00:27:04.301 23:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=343b7b20-cded-4e8f-b0e0-f463c5c5984a 00:27:04.301 23:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 343b7b20-cded-4e8f-b0e0-f463c5c5984a ]] 00:27:04.301 23:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 343b7b20-cded-4e8f-b0e0-f463c5c5984a 5120 00:27:04.301 23:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:04.301 23:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:04.301 23:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=343b7b20-cded-4e8f-b0e0-f463c5c5984a 00:27:04.301 23:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:04.301 23:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 343b7b20-cded-4e8f-b0e0-f463c5c5984a 00:27:04.301 23:32:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=343b7b20-cded-4e8f-b0e0-f463c5c5984a 00:27:04.301 23:32:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:04.301 23:32:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:04.301 23:32:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:04.301 23:32:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 343b7b20-cded-4e8f-b0e0-f463c5c5984a 00:27:04.301 23:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:04.301 { 00:27:04.301 "name": "343b7b20-cded-4e8f-b0e0-f463c5c5984a", 00:27:04.301 "aliases": [ 00:27:04.301 "lvs/basen1p0" 00:27:04.301 ], 00:27:04.301 "product_name": "Logical Volume", 00:27:04.301 "block_size": 4096, 00:27:04.301 "num_blocks": 5242880, 00:27:04.301 "uuid": "343b7b20-cded-4e8f-b0e0-f463c5c5984a", 00:27:04.301 "assigned_rate_limits": { 00:27:04.301 "rw_ios_per_sec": 0, 00:27:04.301 "rw_mbytes_per_sec": 0, 00:27:04.301 "r_mbytes_per_sec": 0, 00:27:04.301 "w_mbytes_per_sec": 0 00:27:04.301 }, 00:27:04.301 "claimed": false, 00:27:04.301 "zoned": false, 00:27:04.301 "supported_io_types": { 00:27:04.301 "read": true, 00:27:04.301 "write": true, 00:27:04.301 "unmap": true, 00:27:04.301 "flush": false, 00:27:04.301 "reset": true, 00:27:04.301 "nvme_admin": false, 00:27:04.301 "nvme_io": false, 00:27:04.301 "nvme_io_md": false, 00:27:04.301 "write_zeroes": 
true, 00:27:04.301 "zcopy": false, 00:27:04.301 "get_zone_info": false, 00:27:04.301 "zone_management": false, 00:27:04.301 "zone_append": false, 00:27:04.301 "compare": false, 00:27:04.301 "compare_and_write": false, 00:27:04.301 "abort": false, 00:27:04.301 "seek_hole": true, 00:27:04.301 "seek_data": true, 00:27:04.301 "copy": false, 00:27:04.301 "nvme_iov_md": false 00:27:04.301 }, 00:27:04.301 "driver_specific": { 00:27:04.301 "lvol": { 00:27:04.301 "lvol_store_uuid": "f9f58f5e-ee97-4ed2-9917-e676ca3cb10a", 00:27:04.301 "base_bdev": "basen1", 00:27:04.302 "thin_provision": true, 00:27:04.302 "num_allocated_clusters": 0, 00:27:04.302 "snapshot": false, 00:27:04.302 "clone": false, 00:27:04.302 "esnap_clone": false 00:27:04.302 } 00:27:04.302 } 00:27:04.302 } 00:27:04.302 ]' 00:27:04.302 23:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:04.561 23:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:04.561 23:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:04.561 23:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:27:04.561 23:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:27:04.561 23:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:27:04.561 23:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:04.561 23:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:04.561 23:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:04.822 23:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:04.822 23:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:04.823 23:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:05.084 23:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:05.084 23:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:05.084 23:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 343b7b20-cded-4e8f-b0e0-f463c5c5984a -c cachen1p0 --l2p_dram_limit 2 00:27:05.084 [2024-11-17 23:32:28.871470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.084 [2024-11-17 23:32:28.871546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:05.084 [2024-11-17 23:32:28.871563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:05.084 [2024-11-17 23:32:28.871574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.084 [2024-11-17 23:32:28.871639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.084 [2024-11-17 23:32:28.871653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:05.084 [2024-11-17 23:32:28.871670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:27:05.084 [2024-11-17 23:32:28.871683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.084 [2024-11-17 23:32:28.871709] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:05.084 [2024-11-17 
23:32:28.872055] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:05.084 [2024-11-17 23:32:28.872078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.084 [2024-11-17 23:32:28.872094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:05.084 [2024-11-17 23:32:28.872109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.379 ms 00:27:05.084 [2024-11-17 23:32:28.872120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.084 [2024-11-17 23:32:28.872157] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 0e11daf4-0220-4a96-85b9-19e494dd5570 00:27:05.084 [2024-11-17 23:32:28.874013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.084 [2024-11-17 23:32:28.874062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:05.084 [2024-11-17 23:32:28.874080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:27:05.084 [2024-11-17 23:32:28.874089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.084 [2024-11-17 23:32:28.883232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.084 [2024-11-17 23:32:28.883449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:05.084 [2024-11-17 23:32:28.883476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.047 ms 00:27:05.084 [2024-11-17 23:32:28.883485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.084 [2024-11-17 23:32:28.883552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.084 [2024-11-17 23:32:28.883568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:05.084 [2024-11-17 23:32:28.883580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:27:05.084 [2024-11-17 23:32:28.883591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.084 [2024-11-17 23:32:28.883651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.084 [2024-11-17 23:32:28.883662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:05.084 [2024-11-17 23:32:28.883673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:05.084 [2024-11-17 23:32:28.883681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.084 [2024-11-17 23:32:28.883707] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:05.084 [2024-11-17 23:32:28.886081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.084 [2024-11-17 23:32:28.886130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:05.084 [2024-11-17 23:32:28.886143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.382 ms 00:27:05.084 [2024-11-17 23:32:28.886160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.084 [2024-11-17 23:32:28.886195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.084 [2024-11-17 23:32:28.886211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:05.084 [2024-11-17 23:32:28.886220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:05.084 [2024-11-17 23:32:28.886234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:05.084 [2024-11-17 23:32:28.886264] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:05.084 [2024-11-17 23:32:28.886420] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:05.084 [2024-11-17 23:32:28.886434] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:05.084 [2024-11-17 23:32:28.886449] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:05.084 [2024-11-17 23:32:28.886461] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:05.084 [2024-11-17 23:32:28.886478] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:05.084 [2024-11-17 23:32:28.886489] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:05.084 [2024-11-17 23:32:28.886499] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:05.084 [2024-11-17 23:32:28.886509] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:05.084 [2024-11-17 23:32:28.886519] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:05.084 [2024-11-17 23:32:28.886526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.084 [2024-11-17 23:32:28.886540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:05.084 [2024-11-17 23:32:28.886548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.264 ms 00:27:05.084 [2024-11-17 23:32:28.886558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.084 [2024-11-17 23:32:28.886642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.084 [2024-11-17 23:32:28.886654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:05.084 [2024-11-17 23:32:28.886662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:27:05.084 [2024-11-17 23:32:28.886671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.084 [2024-11-17 23:32:28.886768] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:05.084 [2024-11-17 23:32:28.886780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:05.084 [2024-11-17 23:32:28.886788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:05.084 [2024-11-17 23:32:28.886798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.084 [2024-11-17 23:32:28.886805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:05.084 [2024-11-17 23:32:28.886814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:05.084 [2024-11-17 23:32:28.886822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:05.084 [2024-11-17 23:32:28.886833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:05.084 [2024-11-17 23:32:28.886840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:05.084 [2024-11-17 23:32:28.886849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.084 [2024-11-17 23:32:28.886855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:05.084 [2024-11-17 23:32:28.886865] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:05.084 [2024-11-17 23:32:28.886872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.084 [2024-11-17 23:32:28.887110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:05.084 [2024-11-17 23:32:28.887138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:05.084 [2024-11-17 23:32:28.887158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.084 [2024-11-17 23:32:28.887177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:05.084 [2024-11-17 23:32:28.887198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:05.084 [2024-11-17 23:32:28.887216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.084 [2024-11-17 23:32:28.887237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:05.084 [2024-11-17 23:32:28.887256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:05.084 [2024-11-17 23:32:28.887277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:05.084 [2024-11-17 23:32:28.887362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:05.084 [2024-11-17 23:32:28.887388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:05.084 [2024-11-17 23:32:28.887406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:05.084 [2024-11-17 23:32:28.887425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:05.084 [2024-11-17 23:32:28.887439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:05.084 [2024-11-17 23:32:28.887456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:05.084 [2024-11-17 23:32:28.887471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:05.084 [2024-11-17 23:32:28.887488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:05.084 [2024-11-17 23:32:28.887699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:05.084 [2024-11-17 23:32:28.887720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:05.084 [2024-11-17 23:32:28.887735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:05.084 [2024-11-17 23:32:28.887754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.085 [2024-11-17 23:32:28.887769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:05.085 [2024-11-17 23:32:28.887785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:05.085 [2024-11-17 23:32:28.887851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.085 [2024-11-17 23:32:28.887866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:05.085 [2024-11-17 23:32:28.887898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:05.085 [2024-11-17 23:32:28.887916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.085 [2024-11-17 23:32:28.887969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:05.085 [2024-11-17 23:32:28.887990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:05.085 [2024-11-17 23:32:28.888109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.085 [2024-11-17 23:32:28.888122] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:05.085 [2024-11-17 23:32:28.888130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:05.085 [2024-11-17 23:32:28.888141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:05.085 [2024-11-17 23:32:28.888148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.085 [2024-11-17 23:32:28.888161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:05.085 [2024-11-17 23:32:28.888167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:05.085 [2024-11-17 23:32:28.888176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:05.085 [2024-11-17 23:32:28.888188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:05.085 [2024-11-17 23:32:28.888196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:05.085 [2024-11-17 23:32:28.888201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:05.085 [2024-11-17 23:32:28.888214] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:05.085 [2024-11-17 23:32:28.888227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:05.085 [2024-11-17 23:32:28.888236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:05.085 [2024-11-17 23:32:28.888242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:05.085 [2024-11-17 23:32:28.888250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:05.085 [2024-11-17 23:32:28.888256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:05.085 [2024-11-17 23:32:28.888264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:05.085 [2024-11-17 23:32:28.888270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:05.085 [2024-11-17 23:32:28.888280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:05.085 [2024-11-17 23:32:28.888286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:05.085 [2024-11-17 23:32:28.888294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:05.085 [2024-11-17 23:32:28.888299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:05.085 [2024-11-17 23:32:28.888306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:05.085 [2024-11-17 23:32:28.888312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:05.085 [2024-11-17 23:32:28.888319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:05.085 [2024-11-17 23:32:28.888325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:05.085 [2024-11-17 23:32:28.888332] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:05.085 [2024-11-17 23:32:28.888340] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:05.085 [2024-11-17 23:32:28.888348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:05.085 [2024-11-17 23:32:28.888355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:05.085 [2024-11-17 23:32:28.888362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:05.085 [2024-11-17 23:32:28.888367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:05.085 [2024-11-17 23:32:28.888378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.085 [2024-11-17 23:32:28.888385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:05.085 [2024-11-17 23:32:28.888395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.675 ms 00:27:05.085 [2024-11-17 23:32:28.888401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.085 [2024-11-17 23:32:28.888472] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
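A quick consistency check on the geometry dumped above, assuming the default 4 KiB FTL block (the math below is a reader-side sanity check, not part of the test):

  $ echo $(( 3774873 * 4 ))                                  # L2P table: entries x 4 B address = 15099492 B
  $ awk 'BEGIN { printf "%.2f MiB\n", 3774873 * 4 / 2^20 }'  # ~14.40 MiB raw, padded up to the 14.50 MiB l2p region (blk_sz 0xe80 = 3712 blocks)
  $ awk 'BEGIN { printf "%.2f MiB\n", 2048 * 4096 / 2^20 }'  # P2L checkpoint: 2048 pages x 4 KiB = 8.00 MiB, matching each p2l0..p2l3 region

So the l2p, p2l and superblock regions in the dump all follow directly from the reported L2P entry count, address size and checkpoint page count.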
00:27:05.085 [2024-11-17 23:32:28.888484] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:09.292 [2024-11-17 23:32:32.778369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.292 [2024-11-17 23:32:32.778467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:09.292 [2024-11-17 23:32:32.778487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3889.870 ms 00:27:09.292 [2024-11-17 23:32:32.778497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.292 [2024-11-17 23:32:32.792040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.292 [2024-11-17 23:32:32.792095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:09.292 [2024-11-17 23:32:32.792113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.422 ms 00:27:09.292 [2024-11-17 23:32:32.792122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.292 [2024-11-17 23:32:32.792187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.292 [2024-11-17 23:32:32.792200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:09.292 [2024-11-17 23:32:32.792212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:09.292 [2024-11-17 23:32:32.792220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.292 [2024-11-17 23:32:32.804928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.292 [2024-11-17 23:32:32.804973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:09.292 [2024-11-17 23:32:32.804987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.654 ms 00:27:09.292 [2024-11-17 23:32:32.804996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.292 [2024-11-17 23:32:32.805035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.292 [2024-11-17 23:32:32.805044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:09.292 [2024-11-17 23:32:32.805055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:09.292 [2024-11-17 23:32:32.805068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.292 [2024-11-17 23:32:32.805612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.292 [2024-11-17 23:32:32.805635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:09.292 [2024-11-17 23:32:32.805649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.487 ms 00:27:09.292 [2024-11-17 23:32:32.805657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.292 [2024-11-17 23:32:32.805708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.292 [2024-11-17 23:32:32.805721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:09.292 [2024-11-17 23:32:32.805732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:09.292 [2024-11-17 23:32:32.805741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.292 [2024-11-17 23:32:32.814287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.292 [2024-11-17 23:32:32.814335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:09.292 [2024-11-17 23:32:32.814349] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.519 ms 00:27:09.292 [2024-11-17 23:32:32.814358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.292 [2024-11-17 23:32:32.824177] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:09.292 [2024-11-17 23:32:32.825469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.292 [2024-11-17 23:32:32.825658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:09.292 [2024-11-17 23:32:32.825677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.035 ms 00:27:09.292 [2024-11-17 23:32:32.825688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.292 [2024-11-17 23:32:32.852073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.292 [2024-11-17 23:32:32.852146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:09.292 [2024-11-17 23:32:32.852166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.352 ms 00:27:09.292 [2024-11-17 23:32:32.852183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.292 [2024-11-17 23:32:32.852302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.293 [2024-11-17 23:32:32.852319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:09.293 [2024-11-17 23:32:32.852331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:27:09.293 [2024-11-17 23:32:32.852344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.293 [2024-11-17 23:32:32.857468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.293 [2024-11-17 23:32:32.857656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:09.293 [2024-11-17 23:32:32.857682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.082 ms 00:27:09.293 [2024-11-17 23:32:32.857696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.293 [2024-11-17 23:32:32.862618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.293 [2024-11-17 23:32:32.862674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:09.293 [2024-11-17 23:32:32.862685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.879 ms 00:27:09.293 [2024-11-17 23:32:32.862695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.293 [2024-11-17 23:32:32.863065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.293 [2024-11-17 23:32:32.863081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:09.293 [2024-11-17 23:32:32.863090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.323 ms 00:27:09.293 [2024-11-17 23:32:32.863104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.293 [2024-11-17 23:32:32.905532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.293 [2024-11-17 23:32:32.905593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:09.293 [2024-11-17 23:32:32.905606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.404 ms 00:27:09.293 [2024-11-17 23:32:32.905626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.293 [2024-11-17 23:32:32.912502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
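Each management step above is emitted by trace_step as a fixed Action / name / duration / status quadruple, which makes the startup profile easy to mine from a saved copy of this console output. A rough sketch, assuming the one-entry-per-line form SPDK actually writes and a hypothetical ftl.log capture:

  $ awk -F'name: |duration: ' \
      '/name:/ {n=$2} /duration:/ {split($2, d, " "); print d[1], "ms -", n}' ftl.log |
      sort -rn | head -5

In this run the 3889.870 ms "Scrub NV cache" step dominates startup, exactly as the scrub notice warned it might.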
00:27:09.293 [2024-11-17 23:32:32.912585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:09.293 [2024-11-17 23:32:32.912602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.815 ms 00:27:09.293 [2024-11-17 23:32:32.912614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.293 [2024-11-17 23:32:32.918395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.293 [2024-11-17 23:32:32.918448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:09.293 [2024-11-17 23:32:32.918458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.736 ms 00:27:09.293 [2024-11-17 23:32:32.918468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.293 [2024-11-17 23:32:32.928148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.293 [2024-11-17 23:32:32.928280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:09.293 [2024-11-17 23:32:32.928315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.625 ms 00:27:09.293 [2024-11-17 23:32:32.928347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.293 [2024-11-17 23:32:32.928474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.293 [2024-11-17 23:32:32.928508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:09.293 [2024-11-17 23:32:32.928589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:09.293 [2024-11-17 23:32:32.928615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.293 [2024-11-17 23:32:32.928796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.293 [2024-11-17 23:32:32.928829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:09.293 [2024-11-17 23:32:32.928858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.095 ms 00:27:09.293 [2024-11-17 23:32:32.928932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.293 [2024-11-17 23:32:32.931524] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4058.868 ms, result 0 00:27:09.293 { 00:27:09.293 "name": "ftl", 00:27:09.293 "uuid": "0e11daf4-0220-4a96-85b9-19e494dd5570" 00:27:09.293 } 00:27:09.293 23:32:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:09.554 [2024-11-17 23:32:33.154646] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.554 23:32:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:09.814 23:32:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:09.814 [2024-11-17 23:32:33.567103] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:09.814 23:32:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:10.074 [2024-11-17 23:32:33.735348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:10.074 23:32:33 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:10.334 Fill FTL, iteration 1 00:27:10.334 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:10.334 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:10.334 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:10.334 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:10.334 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:10.334 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:10.334 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:10.334 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:10.334 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:10.334 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:10.334 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:10.335 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:10.335 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:10.335 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:10.335 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:10.335 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:10.335 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=91667 00:27:10.335 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:10.335 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:10.335 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 91667 /var/tmp/spdk.tgt.sock 00:27:10.335 23:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91667 ']' 00:27:10.335 23:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:10.335 23:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.335 23:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:10.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:10.335 23:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.335 23:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:10.596 [2024-11-17 23:32:34.168412] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
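Condensing the rpc.py calls traced above (ftl/common.sh@121-126), the target-side export of the FTL bdev over NVMe/TCP is just four RPCs plus a config save (full /home/vagrant/spdk_repo/spdk paths shortened here):

  $ scripts/rpc.py nvmf_create_transport --trtype TCP
  $ scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  $ scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  $ scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  $ scripts/rpc.py save_config

The save_config at the end (its output destination is not visible in the trace) is what lets the harness bring the same target back up after the shutdown half of the test.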
00:27:10.596 [2024-11-17 23:32:34.169138] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91667 ] 00:27:10.596 [2024-11-17 23:32:34.315596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.596 [2024-11-17 23:32:34.336337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.183 23:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:11.183 23:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:11.183 23:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:11.447 ftln1 00:27:11.447 23:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:11.447 23:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:11.708 23:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:11.708 23:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 91667 00:27:11.708 23:32:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91667 ']' 00:27:11.708 23:32:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 91667 00:27:11.708 23:32:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:11.708 23:32:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:11.708 23:32:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91667 00:27:11.708 killing process with pid 91667 00:27:11.708 23:32:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:11.708 23:32:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:11.708 23:32:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91667' 00:27:11.708 23:32:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 91667 00:27:11.708 23:32:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 91667 00:27:11.969 23:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:11.969 23:32:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:12.230 [2024-11-17 23:32:35.853544] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
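The tcp_dd helper traced above works by standing up a throwaway initiator: a second SPDK app on its own RPC socket attaches to the TCP target, its bdev config is snapshotted to ini.json, and spdk_dd then replays that config. A condensed reading of the trace (not the verbatim script; paths shortened):

  $ build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  $ scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl \
      -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0      # yields bdev "ftln1"
  $ { echo '{"subsystems": ['
      scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
      echo ']}'; } > test/ftl/config/ini.json
  $ kill "$spdk_ini_pid"                                                     # the script uses its killprocess helper
  $ build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 \
      --bs=1048576 --count=1024 --qd=2 --seek=0

Killing the helper target first is deliberate: spdk_dd reconstructs the bdev stack from the JSON itself, so only one process owns the socket at a time.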
00:27:12.230 [2024-11-17 23:32:35.853698] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91703 ] 00:27:12.230 [2024-11-17 23:32:36.001128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.230 [2024-11-17 23:32:36.029550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.615  [2024-11-17T23:32:38.379Z] Copying: 179/1024 [MB] (179 MBps) [2024-11-17T23:32:39.326Z] Copying: 376/1024 [MB] (197 MBps) [2024-11-17T23:32:40.271Z] Copying: 617/1024 [MB] (241 MBps) [2024-11-17T23:32:41.212Z] Copying: 846/1024 [MB] (229 MBps) [2024-11-17T23:32:41.212Z] Copying: 1024/1024 [MB] (average 218 MBps) 00:27:17.391 00:27:17.391 Calculate MD5 checksum, iteration 1 00:27:17.391 23:32:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:17.391 23:32:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:17.391 23:32:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:17.391 23:32:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:17.391 23:32:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:17.391 23:32:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:17.391 23:32:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:17.391 23:32:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:17.391 [2024-11-17 23:32:41.132608] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
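The upgrade_shutdown.sh counters traced earlier (seek=0, skip=0, bs=1048576, count=1024, qd=2, iterations=2) imply roughly this fill/verify loop; each iteration writes 1 GiB of urandom at the next 1 GiB offset, reads it back through FTL, and records an MD5 so it can be compared against a second read after the shutdown/upgrade restart. A sketch, with $testdir standing in for test/ftl:

  seek=0; skip=0; sums=()
  for ((i = 0; i < iterations; i++)); do
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
      ((seek += 1024))
      tcp_dd --ib=ftln1 --of=$testdir/file --bs=1048576 --count=1024 --qd=2 --skip=$skip
      ((skip += 1024))
      sums[i]=$(md5sum "$testdir/file" | cut -f1 -d' ')   # e3ce9a44... for iteration 1 in this run
  done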
00:27:17.391 [2024-11-17 23:32:41.132732] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91758 ] 00:27:17.651 [2024-11-17 23:32:41.274625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.651 [2024-11-17 23:32:41.291256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.035  [2024-11-17T23:32:43.115Z] Copying: 646/1024 [MB] (646 MBps) [2024-11-17T23:32:43.376Z] Copying: 1024/1024 [MB] (average 628 MBps) 00:27:19.555 00:27:19.555 23:32:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:19.555 23:32:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:22.088 23:32:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:22.088 Fill FTL, iteration 2 00:27:22.088 23:32:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=e3ce9a44323da658fd8177ff07bc423c 00:27:22.088 23:32:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:22.088 23:32:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:22.088 23:32:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:22.088 23:32:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:22.088 23:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:22.088 23:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:22.088 23:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:22.088 23:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:22.088 23:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:22.088 [2024-11-17 23:32:45.378946] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:27:22.088 [2024-11-17 23:32:45.379188] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91810 ] 00:27:22.088 [2024-11-17 23:32:45.523624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.088 [2024-11-17 23:32:45.542780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.030  [2024-11-17T23:32:47.800Z] Copying: 182/1024 [MB] (182 MBps) [2024-11-17T23:32:48.743Z] Copying: 357/1024 [MB] (175 MBps) [2024-11-17T23:32:50.148Z] Copying: 594/1024 [MB] (237 MBps) [2024-11-17T23:32:50.408Z] Copying: 850/1024 [MB] (256 MBps) [2024-11-17T23:32:50.668Z] Copying: 1024/1024 [MB] (average 219 MBps) 00:27:26.847 00:27:26.847 Calculate MD5 checksum, iteration 2 00:27:26.847 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:26.847 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:26.847 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:26.847 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:26.847 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:26.847 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:26.847 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:26.847 23:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:26.847 [2024-11-17 23:32:50.607084] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:27:26.847 [2024-11-17 23:32:50.607333] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91863 ] 00:27:27.109 [2024-11-17 23:32:50.750769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.109 [2024-11-17 23:32:50.772333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.494  [2024-11-17T23:32:53.000Z] Copying: 634/1024 [MB] (634 MBps) [2024-11-17T23:32:53.569Z] Copying: 1024/1024 [MB] (average 635 MBps) 00:27:29.748 00:27:29.748 23:32:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:29.748 23:32:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:31.652 23:32:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:31.652 23:32:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a6eb187a515eadd8c20489eb2df7855f 00:27:31.652 23:32:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:31.652 23:32:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:31.652 23:32:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:31.910 [2024-11-17 23:32:55.470985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.910 [2024-11-17 23:32:55.471030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:31.910 [2024-11-17 23:32:55.471042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:31.910 [2024-11-17 23:32:55.471049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.910 [2024-11-17 23:32:55.471070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.910 [2024-11-17 23:32:55.471077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:31.910 [2024-11-17 23:32:55.471085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:31.910 [2024-11-17 23:32:55.471091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.910 [2024-11-17 23:32:55.471107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.910 [2024-11-17 23:32:55.471114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:31.910 [2024-11-17 23:32:55.471121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:31.910 [2024-11-17 23:32:55.471126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.910 [2024-11-17 23:32:55.471185] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.191 ms, result 0 00:27:31.910 true 00:27:31.910 23:32:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:31.910 { 00:27:31.910 "name": "ftl", 00:27:31.910 "properties": [ 00:27:31.910 { 00:27:31.910 "name": "superblock_version", 00:27:31.910 "value": 5, 00:27:31.910 "read-only": true 00:27:31.910 }, 00:27:31.910 { 00:27:31.910 "name": "base_device", 00:27:31.910 "bands": [ 00:27:31.910 { 00:27:31.910 "id": 0, 00:27:31.910 "state": "FREE", 00:27:31.910 "validity": 0.0 
00:27:31.910 }, 00:27:31.910 { 00:27:31.910 "id": 1, 00:27:31.910 "state": "FREE", 00:27:31.910 "validity": 0.0 00:27:31.910 }, 00:27:31.910 { 00:27:31.910 "id": 2, 00:27:31.910 "state": "FREE", 00:27:31.910 "validity": 0.0 00:27:31.910 }, 00:27:31.910 { 00:27:31.910 "id": 3, 00:27:31.910 "state": "FREE", 00:27:31.910 "validity": 0.0 00:27:31.910 }, 00:27:31.910 { 00:27:31.910 "id": 4, 00:27:31.910 "state": "FREE", 00:27:31.910 "validity": 0.0 00:27:31.910 }, 00:27:31.910 { 00:27:31.910 "id": 5, 00:27:31.910 "state": "FREE", 00:27:31.910 "validity": 0.0 00:27:31.910 }, 00:27:31.910 { 00:27:31.910 "id": 6, 00:27:31.910 "state": "FREE", 00:27:31.910 "validity": 0.0 00:27:31.910 }, 00:27:31.910 { 00:27:31.910 "id": 7, 00:27:31.910 "state": "FREE", 00:27:31.910 "validity": 0.0 00:27:31.910 }, 00:27:31.910 { 00:27:31.910 "id": 8, 00:27:31.910 "state": "FREE", 00:27:31.910 "validity": 0.0 00:27:31.910 }, 00:27:31.910 { 00:27:31.910 "id": 9, 00:27:31.910 "state": "FREE", 00:27:31.910 "validity": 0.0 00:27:31.910 }, 00:27:31.910 { 00:27:31.910 "id": 10, 00:27:31.910 "state": "FREE", 00:27:31.910 "validity": 0.0 00:27:31.910 }, 00:27:31.910 { 00:27:31.910 "id": 11, 00:27:31.910 "state": "FREE", 00:27:31.910 "validity": 0.0 00:27:31.910 }, 00:27:31.910 { 00:27:31.910 "id": 12, 00:27:31.910 "state": "FREE", 00:27:31.910 "validity": 0.0 00:27:31.910 }, 00:27:31.910 { 00:27:31.910 "id": 13, 00:27:31.910 "state": "FREE", 00:27:31.910 "validity": 0.0 00:27:31.910 }, 00:27:31.910 { 00:27:31.910 "id": 14, 00:27:31.910 "state": "FREE", 00:27:31.910 "validity": 0.0 00:27:31.910 }, 00:27:31.910 { 00:27:31.910 "id": 15, 00:27:31.910 "state": "FREE", 00:27:31.910 "validity": 0.0 00:27:31.910 }, 00:27:31.910 { 00:27:31.910 "id": 16, 00:27:31.910 "state": "FREE", 00:27:31.910 "validity": 0.0 00:27:31.910 }, 00:27:31.910 { 00:27:31.910 "id": 17, 00:27:31.911 "state": "FREE", 00:27:31.911 "validity": 0.0 00:27:31.911 } 00:27:31.911 ], 00:27:31.911 "read-only": true 00:27:31.911 }, 00:27:31.911 { 00:27:31.911 "name": "cache_device", 00:27:31.911 "type": "bdev", 00:27:31.911 "chunks": [ 00:27:31.911 { 00:27:31.911 "id": 0, 00:27:31.911 "state": "INACTIVE", 00:27:31.911 "utilization": 0.0 00:27:31.911 }, 00:27:31.911 { 00:27:31.911 "id": 1, 00:27:31.911 "state": "CLOSED", 00:27:31.911 "utilization": 1.0 00:27:31.911 }, 00:27:31.911 { 00:27:31.911 "id": 2, 00:27:31.911 "state": "CLOSED", 00:27:31.911 "utilization": 1.0 00:27:31.911 }, 00:27:31.911 { 00:27:31.911 "id": 3, 00:27:31.911 "state": "OPEN", 00:27:31.911 "utilization": 0.001953125 00:27:31.911 }, 00:27:31.911 { 00:27:31.911 "id": 4, 00:27:31.911 "state": "OPEN", 00:27:31.911 "utilization": 0.0 00:27:31.911 } 00:27:31.911 ], 00:27:31.911 "read-only": true 00:27:31.911 }, 00:27:31.911 { 00:27:31.911 "name": "verbose_mode", 00:27:31.911 "value": true, 00:27:31.911 "unit": "", 00:27:31.911 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:31.911 }, 00:27:31.911 { 00:27:31.911 "name": "prep_upgrade_on_shutdown", 00:27:31.911 "value": false, 00:27:31.911 "unit": "", 00:27:31.911 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:31.911 } 00:27:31.911 ] 00:27:31.911 } 00:27:31.911 23:32:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:32.169 [2024-11-17 23:32:55.863264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
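The cache_device section of the dump above is consistent with the 2 GiB just written: chunks 1 and 2 are CLOSED at utilization 1.0 (the two 1 GiB fills), chunk 3 is barely OPEN at 0.001953125, and chunk 0 is INACTIVE. The harness flips prep_upgrade_on_shutdown and then counts the used chunks with the jq filter visible just below; the same two commands in isolation:

  $ scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
  $ scripts/rpc.py bdev_ftl_get_properties -b ftl |
      jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
  3

A result of 0 would mean nothing is left in the NV cache to carry across the shutdown, which is why the test asserts used != 0 before proceeding.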
00:27:32.169 [2024-11-17 23:32:55.863302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:32.169 [2024-11-17 23:32:55.863312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:32.169 [2024-11-17 23:32:55.863318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.169 [2024-11-17 23:32:55.863336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.169 [2024-11-17 23:32:55.863343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:32.169 [2024-11-17 23:32:55.863350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:32.169 [2024-11-17 23:32:55.863355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.169 [2024-11-17 23:32:55.863371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.169 [2024-11-17 23:32:55.863378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:32.169 [2024-11-17 23:32:55.863384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:32.169 [2024-11-17 23:32:55.863389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.169 [2024-11-17 23:32:55.863433] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.162 ms, result 0 00:27:32.169 true 00:27:32.169 23:32:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:32.169 23:32:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:32.169 23:32:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:32.427 23:32:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:32.427 23:32:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:32.427 23:32:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:32.684 [2024-11-17 23:32:56.267603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.684 [2024-11-17 23:32:56.267731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:32.684 [2024-11-17 23:32:56.267780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:32.684 [2024-11-17 23:32:56.267797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.684 [2024-11-17 23:32:56.267827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.684 [2024-11-17 23:32:56.267845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:32.684 [2024-11-17 23:32:56.267860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:32.684 [2024-11-17 23:32:56.267874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.684 [2024-11-17 23:32:56.267912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.684 [2024-11-17 23:32:56.267928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:32.684 [2024-11-17 23:32:56.267943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:32.684 [2024-11-17 23:32:56.267986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:32.684 [2024-11-17 23:32:56.268357] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.733 ms, result 0 00:27:32.684 true 00:27:32.684 23:32:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:32.684 { 00:27:32.684 "name": "ftl", 00:27:32.684 "properties": [ 00:27:32.684 { 00:27:32.684 "name": "superblock_version", 00:27:32.684 "value": 5, 00:27:32.684 "read-only": true 00:27:32.684 }, 00:27:32.684 { 00:27:32.684 "name": "base_device", 00:27:32.684 "bands": [ 00:27:32.684 { 00:27:32.684 "id": 0, 00:27:32.685 "state": "FREE", 00:27:32.685 "validity": 0.0 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 1, 00:27:32.685 "state": "FREE", 00:27:32.685 "validity": 0.0 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 2, 00:27:32.685 "state": "FREE", 00:27:32.685 "validity": 0.0 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 3, 00:27:32.685 "state": "FREE", 00:27:32.685 "validity": 0.0 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 4, 00:27:32.685 "state": "FREE", 00:27:32.685 "validity": 0.0 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 5, 00:27:32.685 "state": "FREE", 00:27:32.685 "validity": 0.0 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 6, 00:27:32.685 "state": "FREE", 00:27:32.685 "validity": 0.0 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 7, 00:27:32.685 "state": "FREE", 00:27:32.685 "validity": 0.0 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 8, 00:27:32.685 "state": "FREE", 00:27:32.685 "validity": 0.0 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 9, 00:27:32.685 "state": "FREE", 00:27:32.685 "validity": 0.0 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 10, 00:27:32.685 "state": "FREE", 00:27:32.685 "validity": 0.0 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 11, 00:27:32.685 "state": "FREE", 00:27:32.685 "validity": 0.0 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 12, 00:27:32.685 "state": "FREE", 00:27:32.685 "validity": 0.0 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 13, 00:27:32.685 "state": "FREE", 00:27:32.685 "validity": 0.0 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 14, 00:27:32.685 "state": "FREE", 00:27:32.685 "validity": 0.0 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 15, 00:27:32.685 "state": "FREE", 00:27:32.685 "validity": 0.0 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 16, 00:27:32.685 "state": "FREE", 00:27:32.685 "validity": 0.0 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 17, 00:27:32.685 "state": "FREE", 00:27:32.685 "validity": 0.0 00:27:32.685 } 00:27:32.685 ], 00:27:32.685 "read-only": true 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "name": "cache_device", 00:27:32.685 "type": "bdev", 00:27:32.685 "chunks": [ 00:27:32.685 { 00:27:32.685 "id": 0, 00:27:32.685 "state": "INACTIVE", 00:27:32.685 "utilization": 0.0 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 1, 00:27:32.685 "state": "CLOSED", 00:27:32.685 "utilization": 1.0 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 2, 00:27:32.685 "state": "CLOSED", 00:27:32.685 "utilization": 1.0 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 3, 00:27:32.685 "state": "OPEN", 00:27:32.685 "utilization": 0.001953125 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "id": 4, 00:27:32.685 "state": "OPEN", 00:27:32.685 "utilization": 0.0 00:27:32.685 } 00:27:32.685 ], 00:27:32.685 "read-only": true 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "name": "verbose_mode", 
00:27:32.685 "value": true, 00:27:32.685 "unit": "", 00:27:32.685 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:32.685 }, 00:27:32.685 { 00:27:32.685 "name": "prep_upgrade_on_shutdown", 00:27:32.685 "value": true, 00:27:32.685 "unit": "", 00:27:32.685 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:32.685 } 00:27:32.685 ] 00:27:32.685 } 00:27:32.685 23:32:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:32.685 23:32:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 91545 ]] 00:27:32.685 23:32:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 91545 00:27:32.685 23:32:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91545 ']' 00:27:32.685 23:32:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 91545 00:27:32.685 23:32:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:32.946 23:32:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:32.946 23:32:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91545 00:27:32.946 killing process with pid 91545 00:27:32.946 23:32:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:32.946 23:32:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:32.946 23:32:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91545' 00:27:32.946 23:32:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 91545 00:27:32.946 23:32:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 91545 00:27:32.946 [2024-11-17 23:32:56.635252] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:32.946 [2024-11-17 23:32:56.639236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.946 [2024-11-17 23:32:56.639268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:32.946 [2024-11-17 23:32:56.639280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:32.946 [2024-11-17 23:32:56.639286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.946 [2024-11-17 23:32:56.639304] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:32.946 [2024-11-17 23:32:56.639818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.946 [2024-11-17 23:32:56.639843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:32.946 [2024-11-17 23:32:56.639852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.504 ms 00:27:32.946 [2024-11-17 23:32:56.639862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.095 [2024-11-17 23:33:04.661620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.095 [2024-11-17 23:33:04.661686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:41.095 [2024-11-17 23:33:04.661700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8021.698 ms 00:27:41.095 [2024-11-17 23:33:04.661707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.095 [2024-11-17 23:33:04.663297] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:41.095 [2024-11-17 23:33:04.663332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:41.095 [2024-11-17 23:33:04.663341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.576 ms 00:27:41.095 [2024-11-17 23:33:04.663348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.095 [2024-11-17 23:33:04.664223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.095 [2024-11-17 23:33:04.664366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:41.095 [2024-11-17 23:33:04.664385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.846 ms 00:27:41.095 [2024-11-17 23:33:04.664393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.095 [2024-11-17 23:33:04.666340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.095 [2024-11-17 23:33:04.666369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:41.095 [2024-11-17 23:33:04.666377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.899 ms 00:27:41.095 [2024-11-17 23:33:04.666383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.095 [2024-11-17 23:33:04.668463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.095 [2024-11-17 23:33:04.668491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:41.095 [2024-11-17 23:33:04.668500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.053 ms 00:27:41.095 [2024-11-17 23:33:04.668513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.095 [2024-11-17 23:33:04.668585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.095 [2024-11-17 23:33:04.668594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:41.095 [2024-11-17 23:33:04.668601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:41.095 [2024-11-17 23:33:04.668607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.095 [2024-11-17 23:33:04.669949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.095 [2024-11-17 23:33:04.669975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:41.095 [2024-11-17 23:33:04.669982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.329 ms 00:27:41.095 [2024-11-17 23:33:04.669987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.095 [2024-11-17 23:33:04.671235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.096 [2024-11-17 23:33:04.671334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:41.096 [2024-11-17 23:33:04.671346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.224 ms 00:27:41.096 [2024-11-17 23:33:04.671352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.096 [2024-11-17 23:33:04.672621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.096 [2024-11-17 23:33:04.672648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:41.096 [2024-11-17 23:33:04.672655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.247 ms 00:27:41.096 [2024-11-17 23:33:04.672660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.096 [2024-11-17 23:33:04.673743] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:41.096 [2024-11-17 23:33:04.673769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state
00:27:41.096 [2024-11-17 23:33:04.673776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.036 ms
00:27:41.096 [2024-11-17 23:33:04.673781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:41.096 [2024-11-17 23:33:04.673804] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:27:41.096 [2024-11-17 23:33:04.673815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:27:41.096 [2024-11-17 23:33:04.673823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed
00:27:41.096 [2024-11-17 23:33:04.673830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed
00:27:41.096 [2024-11-17 23:33:04.673837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:27:41.096 [2024-11-17 23:33:04.673843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:27:41.096 [2024-11-17 23:33:04.673850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:27:41.096 [2024-11-17 23:33:04.673856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:27:41.096 [2024-11-17 23:33:04.673862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:27:41.096 [2024-11-17 23:33:04.673868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:27:41.096 [2024-11-17 23:33:04.673874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:27:41.096 [2024-11-17 23:33:04.673893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:27:41.096 [2024-11-17 23:33:04.673899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:27:41.096 [2024-11-17 23:33:04.673905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:27:41.096 [2024-11-17 23:33:04.673911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:27:41.096 [2024-11-17 23:33:04.673917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:27:41.096 [2024-11-17 23:33:04.673923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:27:41.096 [2024-11-17 23:33:04.673929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:27:41.096 [2024-11-17 23:33:04.673935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:27:41.096 [2024-11-17 23:33:04.673942] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]
00:27:41.096 [2024-11-17 23:33:04.673949] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 0e11daf4-0220-4a96-85b9-19e494dd5570
00:27:41.096 [2024-11-17 23:33:04.673956] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288
00:27:41.096 [2024-11-17 23:33:04.673961] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752
00:27:41.096 [2024-11-17 23:33:04.673968] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288
00:27:41.096 [2024-11-17 23:33:04.673978] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006
00:27:41.096 [2024-11-17 23:33:04.673984] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:27:41.096 [2024-11-17 23:33:04.673990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0
00:27:41.096 [2024-11-17 23:33:04.673999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0
00:27:41.096 [2024-11-17 23:33:04.674005] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0
00:27:41.096 [2024-11-17 23:33:04.674010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0
00:27:41.096 [2024-11-17 23:33:04.674017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:41.096 [2024-11-17 23:33:04.674023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics
00:27:41.096 [2024-11-17 23:33:04.674030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.214 ms
00:27:41.096 [2024-11-17 23:33:04.674036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:41.096 [2024-11-17 23:33:04.675865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:41.096 [2024-11-17 23:33:04.675963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P
00:27:41.096 [2024-11-17 23:33:04.676008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.817 ms
00:27:41.096 [2024-11-17 23:33:04.676026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:41.096 [2024-11-17 23:33:04.676120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:41.096 [2024-11-17 23:33:04.676139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:27:41.096 [2024-11-17 23:33:04.676216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms
00:27:41.096 [2024-11-17 23:33:04.676235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:41.096 [2024-11-17 23:33:04.682260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:41.096 [2024-11-17 23:33:04.682347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:27:41.096 [2024-11-17 23:33:04.682876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:41.096 [2024-11-17 23:33:04.682962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:41.096 [2024-11-17 23:33:04.683007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:41.096 [2024-11-17 23:33:04.683046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:27:41.096 [2024-11-17 23:33:04.683065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:41.096 [2024-11-17 23:33:04.683161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:41.096 [2024-11-17 23:33:04.683228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:27:41.096 [2024-11-17 23:33:04.683254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:27:41.096 [2024-11-17 23:33:04.683269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:27:41.096 [2024-11-17 23:33:04.683284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:41.096 [2024-11-17 23:33:04.683306] mngt/ftl_mngt.c:
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.096 [2024-11-17 23:33:04.683321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:41.096 [2024-11-17 23:33:04.683336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.096 [2024-11-17 23:33:04.683353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.096 [2024-11-17 23:33:04.694497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.096 [2024-11-17 23:33:04.694620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:41.096 [2024-11-17 23:33:04.694632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.096 [2024-11-17 23:33:04.694645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.096 [2024-11-17 23:33:04.703261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.096 [2024-11-17 23:33:04.703294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:41.096 [2024-11-17 23:33:04.703303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.096 [2024-11-17 23:33:04.703310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.096 [2024-11-17 23:33:04.703380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.096 [2024-11-17 23:33:04.703388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:41.096 [2024-11-17 23:33:04.703400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.096 [2024-11-17 23:33:04.703407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.096 [2024-11-17 23:33:04.703433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.096 [2024-11-17 23:33:04.703445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:41.096 [2024-11-17 23:33:04.703452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.096 [2024-11-17 23:33:04.703459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.096 [2024-11-17 23:33:04.703520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.096 [2024-11-17 23:33:04.703528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:41.097 [2024-11-17 23:33:04.703535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.097 [2024-11-17 23:33:04.703544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.097 [2024-11-17 23:33:04.703571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.097 [2024-11-17 23:33:04.703579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:41.097 [2024-11-17 23:33:04.703585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.097 [2024-11-17 23:33:04.703595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.097 [2024-11-17 23:33:04.703632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.097 [2024-11-17 23:33:04.703641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:41.097 [2024-11-17 23:33:04.703647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.097 [2024-11-17 23:33:04.703654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.097 
[2024-11-17 23:33:04.703697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.097 [2024-11-17 23:33:04.703706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:41.097 [2024-11-17 23:33:04.703713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.097 [2024-11-17 23:33:04.703720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.097 [2024-11-17 23:33:04.703841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8064.537 ms, result 0 00:27:46.390 23:33:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:46.390 23:33:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:46.390 23:33:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:46.390 23:33:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:46.390 23:33:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:46.390 23:33:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92041 00:27:46.390 23:33:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:46.390 23:33:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92041 00:27:46.390 23:33:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 92041 ']' 00:27:46.390 23:33:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:46.390 23:33:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.390 23:33:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:46.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.390 23:33:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.390 23:33:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:46.390 23:33:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:46.390 [2024-11-17 23:33:09.865130] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
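For reference, the WAF figure in the shutdown statistics above is plain arithmetic over the two counters printed next to it: total writes / user writes = 786752 / 524288 ≈ 1.5006, i.e. the FTL issued about 1.5 media writes per host write. A minimal bash sketch recomputing it (hypothetical helper, not part of the test suite):

    # Recompute the write-amplification factor from the ftl_dev_dump_stats
    # counters logged above.
    total_writes=786752   # "total writes" in the stats dump
    user_writes=524288    # "user writes" in the stats dump
    awk -v t="$total_writes" -v u="$user_writes" \
        'BEGIN { printf "WAF: %.4f\n", t / u }'   # prints: WAF: 1.5006
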
00:27:46.390 [2024-11-17 23:33:09.865270] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92041 ] 00:27:46.390 [2024-11-17 23:33:10.014387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.390 [2024-11-17 23:33:10.045460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.652 [2024-11-17 23:33:10.360979] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:46.652 [2024-11-17 23:33:10.361053] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:46.913 [2024-11-17 23:33:10.509481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.913 [2024-11-17 23:33:10.509529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:46.913 [2024-11-17 23:33:10.509542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:46.913 [2024-11-17 23:33:10.509556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.913 [2024-11-17 23:33:10.509606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.913 [2024-11-17 23:33:10.509616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:46.913 [2024-11-17 23:33:10.509627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:46.913 [2024-11-17 23:33:10.509634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.913 [2024-11-17 23:33:10.509655] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:46.913 [2024-11-17 23:33:10.509918] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:46.913 [2024-11-17 23:33:10.509938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.913 [2024-11-17 23:33:10.509945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:46.913 [2024-11-17 23:33:10.509957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.287 ms 00:27:46.913 [2024-11-17 23:33:10.509965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.913 [2024-11-17 23:33:10.511150] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:46.913 [2024-11-17 23:33:10.514002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.913 [2024-11-17 23:33:10.514040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:46.913 [2024-11-17 23:33:10.514060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.854 ms 00:27:46.913 [2024-11-17 23:33:10.514067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.913 [2024-11-17 23:33:10.514124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.913 [2024-11-17 23:33:10.514133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:46.913 [2024-11-17 23:33:10.514141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:46.913 [2024-11-17 23:33:10.514148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.913 [2024-11-17 23:33:10.519461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.913 [2024-11-17 
23:33:10.519492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:46.913 [2024-11-17 23:33:10.519502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.249 ms 00:27:46.913 [2024-11-17 23:33:10.519509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.913 [2024-11-17 23:33:10.519551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.913 [2024-11-17 23:33:10.519560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:46.913 [2024-11-17 23:33:10.519568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:46.913 [2024-11-17 23:33:10.519575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.913 [2024-11-17 23:33:10.519620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.913 [2024-11-17 23:33:10.519632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:46.913 [2024-11-17 23:33:10.519639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:46.913 [2024-11-17 23:33:10.519648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.913 [2024-11-17 23:33:10.519673] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:46.913 [2024-11-17 23:33:10.521159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.913 [2024-11-17 23:33:10.521295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:46.913 [2024-11-17 23:33:10.521310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.493 ms 00:27:46.913 [2024-11-17 23:33:10.521318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.913 [2024-11-17 23:33:10.521349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.913 [2024-11-17 23:33:10.521363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:46.913 [2024-11-17 23:33:10.521371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:46.913 [2024-11-17 23:33:10.521379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.913 [2024-11-17 23:33:10.521399] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:46.913 [2024-11-17 23:33:10.521423] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:46.913 [2024-11-17 23:33:10.521457] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:46.913 [2024-11-17 23:33:10.521471] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:46.913 [2024-11-17 23:33:10.521576] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:46.913 [2024-11-17 23:33:10.521589] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:46.913 [2024-11-17 23:33:10.521600] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:46.913 [2024-11-17 23:33:10.521611] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:46.913 [2024-11-17 23:33:10.521620] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:46.913 [2024-11-17 23:33:10.521627] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:46.913 [2024-11-17 23:33:10.521639] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:46.913 [2024-11-17 23:33:10.521646] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:46.913 [2024-11-17 23:33:10.521654] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:46.913 [2024-11-17 23:33:10.521663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.913 [2024-11-17 23:33:10.521671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:46.913 [2024-11-17 23:33:10.521681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.267 ms 00:27:46.913 [2024-11-17 23:33:10.521688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.913 [2024-11-17 23:33:10.521773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.913 [2024-11-17 23:33:10.521782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:46.913 [2024-11-17 23:33:10.521789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:27:46.913 [2024-11-17 23:33:10.521796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.913 [2024-11-17 23:33:10.521929] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:46.913 [2024-11-17 23:33:10.521941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:46.913 [2024-11-17 23:33:10.521951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:46.913 [2024-11-17 23:33:10.521962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.913 [2024-11-17 23:33:10.521971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:46.913 [2024-11-17 23:33:10.521979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:46.913 [2024-11-17 23:33:10.521987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:46.913 [2024-11-17 23:33:10.521994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:46.913 [2024-11-17 23:33:10.522003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:46.913 [2024-11-17 23:33:10.522011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.913 [2024-11-17 23:33:10.522019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:46.913 [2024-11-17 23:33:10.522027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:46.913 [2024-11-17 23:33:10.522034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.913 [2024-11-17 23:33:10.522042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:46.913 [2024-11-17 23:33:10.522050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:46.914 [2024-11-17 23:33:10.522058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.914 [2024-11-17 23:33:10.522070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:46.914 [2024-11-17 23:33:10.522078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:46.914 [2024-11-17 23:33:10.522088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.914 [2024-11-17 23:33:10.522097] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:46.914 [2024-11-17 23:33:10.522105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:46.914 [2024-11-17 23:33:10.522112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:46.914 [2024-11-17 23:33:10.522120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:46.914 [2024-11-17 23:33:10.522127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:46.914 [2024-11-17 23:33:10.522134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:46.914 [2024-11-17 23:33:10.522143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:46.914 [2024-11-17 23:33:10.522150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:46.914 [2024-11-17 23:33:10.522157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:46.914 [2024-11-17 23:33:10.522164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:46.914 [2024-11-17 23:33:10.522172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:46.914 [2024-11-17 23:33:10.522179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:46.914 [2024-11-17 23:33:10.522187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:46.914 [2024-11-17 23:33:10.522195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:46.914 [2024-11-17 23:33:10.522203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.914 [2024-11-17 23:33:10.522211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:46.914 [2024-11-17 23:33:10.522219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:46.914 [2024-11-17 23:33:10.522226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.914 [2024-11-17 23:33:10.522234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:46.914 [2024-11-17 23:33:10.522241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:46.914 [2024-11-17 23:33:10.522249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.914 [2024-11-17 23:33:10.522256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:46.914 [2024-11-17 23:33:10.522264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:46.914 [2024-11-17 23:33:10.522271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.914 [2024-11-17 23:33:10.522279] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:46.914 [2024-11-17 23:33:10.522296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:46.914 [2024-11-17 23:33:10.522303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:46.914 [2024-11-17 23:33:10.522310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.914 [2024-11-17 23:33:10.522319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:46.914 [2024-11-17 23:33:10.522328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:46.914 [2024-11-17 23:33:10.522336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:46.914 [2024-11-17 23:33:10.522346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:46.914 [2024-11-17 23:33:10.522352] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:46.914 [2024-11-17 23:33:10.522359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:46.914 [2024-11-17 23:33:10.522367] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:46.914 [2024-11-17 23:33:10.522375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:46.914 [2024-11-17 23:33:10.522384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:46.914 [2024-11-17 23:33:10.522391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:46.914 [2024-11-17 23:33:10.522399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:46.914 [2024-11-17 23:33:10.522406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:46.914 [2024-11-17 23:33:10.522413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:46.914 [2024-11-17 23:33:10.522420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:46.914 [2024-11-17 23:33:10.522427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:46.914 [2024-11-17 23:33:10.522434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:46.914 [2024-11-17 23:33:10.522442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:46.914 [2024-11-17 23:33:10.522451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:46.914 [2024-11-17 23:33:10.522458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:46.914 [2024-11-17 23:33:10.522466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:46.914 [2024-11-17 23:33:10.522473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:46.914 [2024-11-17 23:33:10.522480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:46.914 [2024-11-17 23:33:10.522487] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:46.914 [2024-11-17 23:33:10.522495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:46.914 [2024-11-17 23:33:10.522506] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:46.914 [2024-11-17 23:33:10.522514] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:46.914 [2024-11-17 23:33:10.522521] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:46.914 [2024-11-17 23:33:10.522528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:46.914 [2024-11-17 23:33:10.522536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.914 [2024-11-17 23:33:10.522548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:46.914 [2024-11-17 23:33:10.522557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.705 ms 00:27:46.914 [2024-11-17 23:33:10.522564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.914 [2024-11-17 23:33:10.522603] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:46.914 [2024-11-17 23:33:10.522615] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:51.121 [2024-11-17 23:33:14.355373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.121 [2024-11-17 23:33:14.355609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:51.121 [2024-11-17 23:33:14.355704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3832.754 ms 00:27:51.121 [2024-11-17 23:33:14.355730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.121 [2024-11-17 23:33:14.364479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.121 [2024-11-17 23:33:14.364638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:51.121 [2024-11-17 23:33:14.364696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.648 ms 00:27:51.121 [2024-11-17 23:33:14.364720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.121 [2024-11-17 23:33:14.364777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.121 [2024-11-17 23:33:14.364799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:51.121 [2024-11-17 23:33:14.364820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:51.121 [2024-11-17 23:33:14.364839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.121 [2024-11-17 23:33:14.374021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.121 [2024-11-17 23:33:14.374162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:51.121 [2024-11-17 23:33:14.374216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.090 ms 00:27:51.121 [2024-11-17 23:33:14.374238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.121 [2024-11-17 23:33:14.374283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.121 [2024-11-17 23:33:14.374304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:51.121 [2024-11-17 23:33:14.374324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:51.121 [2024-11-17 23:33:14.374348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.121 [2024-11-17 23:33:14.374728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.121 [2024-11-17 23:33:14.374771] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:51.121 [2024-11-17 23:33:14.374792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.334 ms 00:27:51.121 [2024-11-17 23:33:14.374812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.121 [2024-11-17 23:33:14.374867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.121 [2024-11-17 23:33:14.374976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:51.121 [2024-11-17 23:33:14.375003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:51.121 [2024-11-17 23:33:14.375024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.121 [2024-11-17 23:33:14.381228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.121 [2024-11-17 23:33:14.381351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:51.121 [2024-11-17 23:33:14.381404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.147 ms 00:27:51.121 [2024-11-17 23:33:14.381427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.121 [2024-11-17 23:33:14.384265] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:51.121 [2024-11-17 23:33:14.384402] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:51.121 [2024-11-17 23:33:14.384478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.121 [2024-11-17 23:33:14.384499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:51.121 [2024-11-17 23:33:14.384520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.937 ms 00:27:51.121 [2024-11-17 23:33:14.384548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.121 [2024-11-17 23:33:14.388477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.121 [2024-11-17 23:33:14.388605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:51.121 [2024-11-17 23:33:14.388656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.883 ms 00:27:51.121 [2024-11-17 23:33:14.388678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.121 [2024-11-17 23:33:14.390471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.121 [2024-11-17 23:33:14.390576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:51.121 [2024-11-17 23:33:14.390622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.727 ms 00:27:51.121 [2024-11-17 23:33:14.390644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.121 [2024-11-17 23:33:14.392649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.121 [2024-11-17 23:33:14.392758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:51.121 [2024-11-17 23:33:14.392805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.668 ms 00:27:51.121 [2024-11-17 23:33:14.392826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.121 [2024-11-17 23:33:14.393185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.121 [2024-11-17 23:33:14.393331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:51.121 [2024-11-17 
23:33:14.393378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.248 ms 00:27:51.121 [2024-11-17 23:33:14.393401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.121 [2024-11-17 23:33:14.421814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.121 [2024-11-17 23:33:14.422016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:51.121 [2024-11-17 23:33:14.422077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.378 ms 00:27:51.121 [2024-11-17 23:33:14.422102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.121 [2024-11-17 23:33:14.429730] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:51.121 [2024-11-17 23:33:14.430543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.121 [2024-11-17 23:33:14.430652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:51.121 [2024-11-17 23:33:14.430701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.392 ms 00:27:51.121 [2024-11-17 23:33:14.430726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.121 [2024-11-17 23:33:14.430812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.121 [2024-11-17 23:33:14.430841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:51.121 [2024-11-17 23:33:14.430863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:51.121 [2024-11-17 23:33:14.430904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.121 [2024-11-17 23:33:14.430966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.121 [2024-11-17 23:33:14.431066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:51.121 [2024-11-17 23:33:14.431088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:51.121 [2024-11-17 23:33:14.431111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.121 [2024-11-17 23:33:14.431148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.121 [2024-11-17 23:33:14.431170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:51.121 [2024-11-17 23:33:14.431229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:51.121 [2024-11-17 23:33:14.431253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.121 [2024-11-17 23:33:14.431300] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:51.122 [2024-11-17 23:33:14.431328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.122 [2024-11-17 23:33:14.431396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:51.122 [2024-11-17 23:33:14.431419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:51.122 [2024-11-17 23:33:14.431443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.122 [2024-11-17 23:33:14.435384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.122 [2024-11-17 23:33:14.435504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:51.122 [2024-11-17 23:33:14.435558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.899 ms 00:27:51.122 [2024-11-17 23:33:14.435684] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0
00:27:51.122 [2024-11-17 23:33:14.435787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:51.122 [2024-11-17 23:33:14.435851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:27:51.122 [2024-11-17 23:33:14.435875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms
00:27:51.122 [2024-11-17 23:33:14.435910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:51.122 [2024-11-17 23:33:14.436972] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3927.064 ms, result 0
00:27:51.122 [2024-11-17 23:33:14.452032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:51.122 [2024-11-17 23:33:14.468036] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:27:51.122 [2024-11-17 23:33:14.476150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:27:51.122 23:33:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:51.122 23:33:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:27:51.122 23:33:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:27:51.122 23:33:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:27:51.122 23:33:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:27:51.122 [2024-11-17 23:33:14.728371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:51.122 [2024-11-17 23:33:14.728422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:27:51.122 [2024-11-17 23:33:14.728440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms
00:27:51.122 [2024-11-17 23:33:14.728449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:51.122 [2024-11-17 23:33:14.728474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:51.122 [2024-11-17 23:33:14.728484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:27:51.122 [2024-11-17 23:33:14.728496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:27:51.122 [2024-11-17 23:33:14.728507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:51.122 [2024-11-17 23:33:14.728528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:27:51.122 [2024-11-17 23:33:14.728552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:27:51.122 [2024-11-17 23:33:14.728560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:27:51.122 [2024-11-17 23:33:14.728572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:27:51.122 [2024-11-17 23:33:14.728633] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.253 ms, result 0
00:27:51.122 true
00:27:51.122 23:33:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
{
  "name": "ftl",
  "properties": [
    {
      "name": "superblock_version",
      "value": 5,
      "read-only": true
    },
    {
      "name": "base_device",
      "bands": [
        { "id": 0,  "state": "CLOSED", "validity": 1.0 },
        { "id": 1,  "state": "CLOSED", "validity": 1.0 },
        { "id": 2,  "state": "CLOSED", "validity": 0.007843137254901933 },
        { "id": 3,  "state": "FREE", "validity": 0.0 },
        { "id": 4,  "state": "FREE", "validity": 0.0 },
        { "id": 5,  "state": "FREE", "validity": 0.0 },
        { "id": 6,  "state": "FREE", "validity": 0.0 },
        { "id": 7,  "state": "FREE", "validity": 0.0 },
        { "id": 8,  "state": "FREE", "validity": 0.0 },
        { "id": 9,  "state": "FREE", "validity": 0.0 },
        { "id": 10, "state": "FREE", "validity": 0.0 },
        { "id": 11, "state": "FREE", "validity": 0.0 },
        { "id": 12, "state": "FREE", "validity": 0.0 },
        { "id": 13, "state": "FREE", "validity": 0.0 },
        { "id": 14, "state": "FREE", "validity": 0.0 },
        { "id": 15, "state": "FREE", "validity": 0.0 },
        { "id": 16, "state": "FREE", "validity": 0.0 },
        { "id": 17, "state": "FREE", "validity": 0.0 }
      ],
      "read-only": true
    },
    {
      "name": "cache_device",
      "type": "bdev",
      "chunks": [
        { "id": 0, "state": "INACTIVE", "utilization": 0.0 },
        { "id": 1, "state": "OPEN", "utilization": 0.0 },
        { "id": 2, "state": "OPEN", "utilization": 0.0 },
        { "id": 3, "state": "FREE", "utilization": 0.0 },
        { "id": 4, "state": "FREE", "utilization": 0.0 }
      ],
      "read-only": true
    },
    {
      "name": "verbose_mode",
      "value": true,
      "unit": "",
      "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
    },
    {
      "name": "prep_upgrade_on_shutdown",
      "value": false,
      "unit": "",
      "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
    }
  ]
}
00:27:51.385 23:33:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:27:51.385 23:33:14
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:51.385 23:33:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:51.385 23:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:51.385 23:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:51.385 23:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:51.385 23:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:51.385 23:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:51.646 Validate MD5 checksum, iteration 1 00:27:51.646 23:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:51.646 23:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:51.646 23:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:51.646 23:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:51.646 23:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:51.646 23:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:51.646 23:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:51.646 23:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:51.646 23:33:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:51.646 23:33:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:51.646 23:33:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:51.646 23:33:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:51.646 23:33:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:51.906 [2024-11-17 23:33:15.482636] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
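The used=0 above comes from the jq filter logged just before it: the script counts cache_device chunks whose utilization is non-zero in the bdev_ftl_get_properties output (and, in the @89/@90 lines, bands in OPENED state), and only moves on to checksum validation because both counts are zero. A standalone sketch of the chunk count, assuming the JSON shown earlier has been saved to a hypothetical props.json:

    # Count NV-cache chunks that still hold data, mirroring the filter above.
    # All five chunks report utilization 0.0 in this run, so used=0 and the
    # [[ $used -ne 0 ]] branch is skipped.
    used=$(jq '[.properties[]
                | select(.name == "cache_device")
                | .chunks[]
                | select(.utilization != 0.0)] | length' props.json)
    [[ "$used" -ne 0 ]] && echo "NV cache still holds $used chunk(s)"
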
00:27:51.906 [2024-11-17 23:33:15.482804] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92110 ] 00:27:51.906 [2024-11-17 23:33:15.631326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.906 [2024-11-17 23:33:15.660064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.288  [2024-11-17T23:33:17.680Z] Copying: 664/1024 [MB] (664 MBps) [2024-11-17T23:33:19.066Z] Copying: 1024/1024 [MB] (average 663 MBps) 00:27:55.245 00:27:55.245 23:33:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:55.245 23:33:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:57.792 23:33:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:57.792 Validate MD5 checksum, iteration 2 00:27:57.792 23:33:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e3ce9a44323da658fd8177ff07bc423c 00:27:57.792 23:33:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e3ce9a44323da658fd8177ff07bc423c != \e\3\c\e\9\a\4\4\3\2\3\d\a\6\5\8\f\d\8\1\7\7\f\f\0\7\b\c\4\2\3\c ]] 00:27:57.792 23:33:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:57.792 23:33:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:57.792 23:33:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:57.792 23:33:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:57.792 23:33:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:57.792 23:33:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:57.792 23:33:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:57.792 23:33:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:57.792 23:33:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:57.792 [2024-11-17 23:33:21.100741] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
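Each checksum iteration above copies 1024 MiB out of the exported ftln1 device with spdk_dd, hashes the output file with md5sum, and fails the test if the digest differs from the reference value; the @103/@105 lines show iteration 1 passing with e3ce9a44323da658fd8177ff07bc423c on both sides of the comparison. A minimal sketch of that check, with a hypothetical file path:

    # Validate a readback file against a reference MD5 digest, as the
    # test does above; exit non-zero on mismatch.
    actual=$(md5sum /path/to/readback-file | cut -f1 -d ' ')
    expected=e3ce9a44323da658fd8177ff07bc423c   # iteration-1 digest from the log
    if [[ "$actual" != "$expected" ]]; then
        echo "MD5 mismatch: $actual != $expected" >&2
        exit 1
    fi
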
00:27:57.792 [2024-11-17 23:33:21.100853] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92178 ] 00:27:57.792 [2024-11-17 23:33:21.244318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.792 [2024-11-17 23:33:21.262188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.175  [2024-11-17T23:33:23.568Z] Copying: 596/1024 [MB] (596 MBps) [2024-11-17T23:33:25.479Z] Copying: 1024/1024 [MB] (average 599 MBps) 00:28:01.658 00:28:01.658 23:33:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:01.658 23:33:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a6eb187a515eadd8c20489eb2df7855f 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a6eb187a515eadd8c20489eb2df7855f != \a\6\e\b\1\8\7\a\5\1\5\e\a\d\d\8\c\2\0\4\8\9\e\b\2\d\f\7\8\5\5\f ]] 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 92041 ]] 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 92041 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:03.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92249 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92249 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 92249 ']' 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
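The kill -9 92041 above is the crux of this phase: the first target is killed with SIGKILL so FTL gets no chance to persist a clean shutdown state, and tcp_target_setup then starts a second target (spdk_tgt_pid=92249) from the same tgt.json, so the upcoming startup has to recover from a dirty state. A rough sketch of that pattern, with hypothetical variable names standing in for the ftl/common.sh plumbing:

    # Dirty-shutdown-and-restart pattern exercised above (sketch only).
    kill -9 "$spdk_tgt_pid"     # SIGKILL: the clean-shutdown path never runs
    unset spdk_tgt_pid
    /path/to/spdk_tgt '--cpumask=[0]' --config=/path/to/tgt.json &
    spdk_tgt_pid=$!             # startup must now recover FTL from dirty state
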
00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:03.571 23:33:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:03.571 [2024-11-17 23:33:27.367082] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:03.571 [2024-11-17 23:33:27.367212] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92249 ] 00:28:03.831 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 92041 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:03.831 [2024-11-17 23:33:27.510819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.831 [2024-11-17 23:33:27.533435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.091 [2024-11-17 23:33:27.785904] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:04.091 [2024-11-17 23:33:27.785952] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:04.353 [2024-11-17 23:33:27.931618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.353 [2024-11-17 23:33:27.931780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:04.353 [2024-11-17 23:33:27.931796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:04.353 [2024-11-17 23:33:27.931804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.353 [2024-11-17 23:33:27.931850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.353 [2024-11-17 23:33:27.931858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:04.353 [2024-11-17 23:33:27.931866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:28:04.353 [2024-11-17 23:33:27.931872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.353 [2024-11-17 23:33:27.931902] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:04.353 [2024-11-17 23:33:27.932093] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:04.353 [2024-11-17 23:33:27.932111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.353 [2024-11-17 23:33:27.932118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:04.353 [2024-11-17 23:33:27.932127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.213 ms 00:28:04.353 [2024-11-17 23:33:27.932132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.353 [2024-11-17 23:33:27.932348] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:04.353 [2024-11-17 23:33:27.935782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.353 [2024-11-17 23:33:27.935812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:04.353 [2024-11-17 23:33:27.935827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.435 ms 00:28:04.353 [2024-11-17 23:33:27.935833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.353 [2024-11-17 23:33:27.936644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:04.353 [2024-11-17 23:33:27.936671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:04.353 [2024-11-17 23:33:27.936683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:28:04.353 [2024-11-17 23:33:27.936689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.353 [2024-11-17 23:33:27.936916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.353 [2024-11-17 23:33:27.936926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:04.353 [2024-11-17 23:33:27.936933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.174 ms 00:28:04.353 [2024-11-17 23:33:27.936940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.353 [2024-11-17 23:33:27.936967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.353 [2024-11-17 23:33:27.936973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:04.353 [2024-11-17 23:33:27.936979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:04.353 [2024-11-17 23:33:27.936984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.353 [2024-11-17 23:33:27.937004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.353 [2024-11-17 23:33:27.937010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:04.354 [2024-11-17 23:33:27.937021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:04.354 [2024-11-17 23:33:27.937027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.354 [2024-11-17 23:33:27.937042] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:04.354 [2024-11-17 23:33:27.937782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.354 [2024-11-17 23:33:27.937794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:04.354 [2024-11-17 23:33:27.937802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.743 ms 00:28:04.354 [2024-11-17 23:33:27.937809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.354 [2024-11-17 23:33:27.937826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.354 [2024-11-17 23:33:27.937832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:04.354 [2024-11-17 23:33:27.937842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:04.354 [2024-11-17 23:33:27.937848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.354 [2024-11-17 23:33:27.937864] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:04.354 [2024-11-17 23:33:27.938014] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:04.354 [2024-11-17 23:33:27.938076] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:04.354 [2024-11-17 23:33:27.938110] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:04.354 [2024-11-17 23:33:27.938206] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:04.354 [2024-11-17 23:33:27.938297] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:04.354 [2024-11-17 23:33:27.938321] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:04.354 [2024-11-17 23:33:27.938377] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:04.354 [2024-11-17 23:33:27.938404] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:04.354 [2024-11-17 23:33:27.938427] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:04.354 [2024-11-17 23:33:27.938442] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:04.354 [2024-11-17 23:33:27.938459] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:04.354 [2024-11-17 23:33:27.938473] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:04.354 [2024-11-17 23:33:27.938487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.354 [2024-11-17 23:33:27.938502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:04.354 [2024-11-17 23:33:27.938553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.624 ms 00:28:04.354 [2024-11-17 23:33:27.938570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.354 [2024-11-17 23:33:27.938649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.354 [2024-11-17 23:33:27.938698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:04.354 [2024-11-17 23:33:27.938722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:04.354 [2024-11-17 23:33:27.938737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.354 [2024-11-17 23:33:27.938848] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:04.354 [2024-11-17 23:33:27.938909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:04.354 [2024-11-17 23:33:27.938928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:04.354 [2024-11-17 23:33:27.939233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.354 [2024-11-17 23:33:27.939564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:04.354 [2024-11-17 23:33:27.939803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:04.354 [2024-11-17 23:33:27.940026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:04.354 [2024-11-17 23:33:27.940065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:04.354 [2024-11-17 23:33:27.940087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:04.354 [2024-11-17 23:33:27.940110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.354 [2024-11-17 23:33:27.940132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:04.354 [2024-11-17 23:33:27.940153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:04.354 [2024-11-17 23:33:27.940173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.354 [2024-11-17 23:33:27.940194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:04.354 [2024-11-17 23:33:27.940236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:28:04.354 [2024-11-17 23:33:27.940259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.354 [2024-11-17 23:33:27.940280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:04.354 [2024-11-17 23:33:27.940299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:04.354 [2024-11-17 23:33:27.940319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.354 [2024-11-17 23:33:27.940340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:04.354 [2024-11-17 23:33:27.940361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:04.354 [2024-11-17 23:33:27.940381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:04.354 [2024-11-17 23:33:27.940402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:04.354 [2024-11-17 23:33:27.940422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:04.354 [2024-11-17 23:33:27.940442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:04.354 [2024-11-17 23:33:27.940462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:04.354 [2024-11-17 23:33:27.940483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:04.354 [2024-11-17 23:33:27.940503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:04.354 [2024-11-17 23:33:27.940524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:04.354 [2024-11-17 23:33:27.940598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:04.354 [2024-11-17 23:33:27.940628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:04.354 [2024-11-17 23:33:27.940653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:04.354 [2024-11-17 23:33:27.940677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:04.354 [2024-11-17 23:33:27.940700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.354 [2024-11-17 23:33:27.940724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:04.354 [2024-11-17 23:33:27.940748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:04.354 [2024-11-17 23:33:27.940771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.354 [2024-11-17 23:33:27.940794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:04.354 [2024-11-17 23:33:27.940818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:04.354 [2024-11-17 23:33:27.940841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.354 [2024-11-17 23:33:27.940865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:04.354 [2024-11-17 23:33:27.940915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:04.354 [2024-11-17 23:33:27.940940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.354 [2024-11-17 23:33:27.940965] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:04.354 [2024-11-17 23:33:27.941004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:04.354 [2024-11-17 23:33:27.941030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:04.354 [2024-11-17 23:33:27.941056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:04.354 [2024-11-17 23:33:27.941079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:04.354 [2024-11-17 23:33:27.941100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:04.354 [2024-11-17 23:33:27.941120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:04.354 [2024-11-17 23:33:27.941140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:04.354 [2024-11-17 23:33:27.941161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:04.354 [2024-11-17 23:33:27.941181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:04.354 [2024-11-17 23:33:27.941206] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:04.354 [2024-11-17 23:33:27.941237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:04.354 [2024-11-17 23:33:27.941263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:04.354 [2024-11-17 23:33:27.941285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:04.354 [2024-11-17 23:33:27.941306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:04.354 [2024-11-17 23:33:27.941328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:04.354 [2024-11-17 23:33:27.941350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:04.354 [2024-11-17 23:33:27.941373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:04.354 [2024-11-17 23:33:27.941395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:04.354 [2024-11-17 23:33:27.941422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:04.354 [2024-11-17 23:33:27.941444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:04.354 [2024-11-17 23:33:27.941466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:04.354 [2024-11-17 23:33:27.941487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:04.354 [2024-11-17 23:33:27.941508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:04.354 [2024-11-17 23:33:27.941530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:04.355 [2024-11-17 23:33:27.941553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:04.355 [2024-11-17 23:33:27.941574] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:04.355 [2024-11-17 23:33:27.941599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:04.355 [2024-11-17 23:33:27.941623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:04.355 [2024-11-17 23:33:27.941646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:04.355 [2024-11-17 23:33:27.941669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:04.355 [2024-11-17 23:33:27.941702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:04.355 [2024-11-17 23:33:27.941730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.355 [2024-11-17 23:33:27.941754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:04.355 [2024-11-17 23:33:27.941783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.931 ms 00:28:04.355 [2024-11-17 23:33:27.941810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.355 [2024-11-17 23:33:27.950358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.355 [2024-11-17 23:33:27.950499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:04.355 [2024-11-17 23:33:27.950517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.261 ms 00:28:04.355 [2024-11-17 23:33:27.950527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.355 [2024-11-17 23:33:27.950573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.355 [2024-11-17 23:33:27.950584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:04.355 [2024-11-17 23:33:27.950594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:04.355 [2024-11-17 23:33:27.950606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.355 [2024-11-17 23:33:27.959634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.355 [2024-11-17 23:33:27.959665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:04.355 [2024-11-17 23:33:27.959674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.965 ms 00:28:04.355 [2024-11-17 23:33:27.959681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.355 [2024-11-17 23:33:27.959710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.355 [2024-11-17 23:33:27.959718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:04.355 [2024-11-17 23:33:27.959731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:04.355 [2024-11-17 23:33:27.959738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.355 [2024-11-17 23:33:27.959810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.355 [2024-11-17 23:33:27.959821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:04.355 [2024-11-17 23:33:27.959831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:28:04.355 [2024-11-17 23:33:27.959839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:04.355 [2024-11-17 23:33:27.959876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.355 [2024-11-17 23:33:27.959901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:04.355 [2024-11-17 23:33:27.959910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:04.355 [2024-11-17 23:33:27.959917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.355 [2024-11-17 23:33:27.965282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.355 [2024-11-17 23:33:27.965391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:04.355 [2024-11-17 23:33:27.965404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.341 ms 00:28:04.355 [2024-11-17 23:33:27.965418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.355 [2024-11-17 23:33:27.965502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.355 [2024-11-17 23:33:27.965512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:04.355 [2024-11-17 23:33:27.965520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:04.355 [2024-11-17 23:33:27.965529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.355 [2024-11-17 23:33:27.982383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.355 [2024-11-17 23:33:27.982671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:04.355 [2024-11-17 23:33:27.982716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.832 ms 00:28:04.355 [2024-11-17 23:33:27.982738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.355 [2024-11-17 23:33:27.985433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.355 [2024-11-17 23:33:27.985497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:04.355 [2024-11-17 23:33:27.985525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.727 ms 00:28:04.355 [2024-11-17 23:33:27.985554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.355 [2024-11-17 23:33:28.003175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.355 [2024-11-17 23:33:28.003304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:04.355 [2024-11-17 23:33:28.003369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.535 ms 00:28:04.355 [2024-11-17 23:33:28.003393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.355 [2024-11-17 23:33:28.003532] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:04.355 [2024-11-17 23:33:28.003647] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:04.355 [2024-11-17 23:33:28.003774] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:04.355 [2024-11-17 23:33:28.003950] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:04.355 [2024-11-17 23:33:28.003963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.355 [2024-11-17 23:33:28.003970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:04.355 [2024-11-17 
23:33:28.003979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.504 ms 00:28:04.355 [2024-11-17 23:33:28.003991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.355 [2024-11-17 23:33:28.004043] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:04.355 [2024-11-17 23:33:28.004054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.355 [2024-11-17 23:33:28.004062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:04.355 [2024-11-17 23:33:28.004071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:04.355 [2024-11-17 23:33:28.004079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.355 [2024-11-17 23:33:28.007138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.355 [2024-11-17 23:33:28.007170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:04.355 [2024-11-17 23:33:28.007180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.039 ms 00:28:04.355 [2024-11-17 23:33:28.007195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.355 [2024-11-17 23:33:28.007770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.355 [2024-11-17 23:33:28.007793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:04.355 [2024-11-17 23:33:28.007802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:04.355 [2024-11-17 23:33:28.007810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.355 [2024-11-17 23:33:28.007875] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:04.355 [2024-11-17 23:33:28.008039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.355 [2024-11-17 23:33:28.008056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:04.355 [2024-11-17 23:33:28.008066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.166 ms 00:28:04.355 [2024-11-17 23:33:28.008077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.299 [2024-11-17 23:33:28.856649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.299 [2024-11-17 23:33:28.856830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:05.299 [2024-11-17 23:33:28.856852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 848.294 ms 00:28:05.299 [2024-11-17 23:33:28.856874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.299 [2024-11-17 23:33:28.858522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.299 [2024-11-17 23:33:28.858569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:05.299 [2024-11-17 23:33:28.858588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.251 ms 00:28:05.299 [2024-11-17 23:33:28.858596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.299 [2024-11-17 23:33:28.859081] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:05.299 [2024-11-17 23:33:28.859105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.299 [2024-11-17 23:33:28.859121] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:05.299 [2024-11-17 23:33:28.859131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.486 ms 00:28:05.299 [2024-11-17 23:33:28.859139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.299 [2024-11-17 23:33:28.859172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.299 [2024-11-17 23:33:28.859181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:05.299 [2024-11-17 23:33:28.859193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:05.299 [2024-11-17 23:33:28.859201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.299 [2024-11-17 23:33:28.859234] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 851.356 ms, result 0 00:28:05.299 [2024-11-17 23:33:28.859277] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:05.299 [2024-11-17 23:33:28.859336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.299 [2024-11-17 23:33:28.859346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:05.299 [2024-11-17 23:33:28.859354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:28:05.299 [2024-11-17 23:33:28.859360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.240 [2024-11-17 23:33:29.702509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.240 [2024-11-17 23:33:29.702697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:06.240 [2024-11-17 23:33:29.702755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 842.773 ms 00:28:06.240 [2024-11-17 23:33:29.702775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.240 [2024-11-17 23:33:29.703977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.240 [2024-11-17 23:33:29.704067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:06.240 [2024-11-17 23:33:29.704116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.834 ms 00:28:06.240 [2024-11-17 23:33:29.704135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.240 [2024-11-17 23:33:29.704488] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:06.240 [2024-11-17 23:33:29.704597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.240 [2024-11-17 23:33:29.704637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:06.240 [2024-11-17 23:33:29.704655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.426 ms 00:28:06.240 [2024-11-17 23:33:29.704669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.240 [2024-11-17 23:33:29.704701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.240 [2024-11-17 23:33:29.704718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:06.240 [2024-11-17 23:33:29.704734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:06.240 [2024-11-17 23:33:29.704748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.240 [2024-11-17 
23:33:29.704787] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 845.510 ms, result 0 00:28:06.240 [2024-11-17 23:33:29.704907] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:06.240 [2024-11-17 23:33:29.704934] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:06.240 [2024-11-17 23:33:29.704958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.241 [2024-11-17 23:33:29.704974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:06.241 [2024-11-17 23:33:29.704990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1697.096 ms 00:28:06.241 [2024-11-17 23:33:29.705038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.241 [2024-11-17 23:33:29.705112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.241 [2024-11-17 23:33:29.705133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:06.241 [2024-11-17 23:33:29.705177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:06.241 [2024-11-17 23:33:29.705194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.241 [2024-11-17 23:33:29.711311] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:06.241 [2024-11-17 23:33:29.711459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.241 [2024-11-17 23:33:29.711481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:06.241 [2024-11-17 23:33:29.711535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.243 ms 00:28:06.241 [2024-11-17 23:33:29.711552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.241 [2024-11-17 23:33:29.712114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.241 [2024-11-17 23:33:29.712183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:06.241 [2024-11-17 23:33:29.712220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.469 ms 00:28:06.241 [2024-11-17 23:33:29.712236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.241 [2024-11-17 23:33:29.713933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.241 [2024-11-17 23:33:29.713996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:06.241 [2024-11-17 23:33:29.714038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.674 ms 00:28:06.241 [2024-11-17 23:33:29.714054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.241 [2024-11-17 23:33:29.714103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.241 [2024-11-17 23:33:29.714120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:06.241 [2024-11-17 23:33:29.714165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:06.241 [2024-11-17 23:33:29.714182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.241 [2024-11-17 23:33:29.714272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.241 [2024-11-17 23:33:29.714320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:06.241 
[2024-11-17 23:33:29.714337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:06.241 [2024-11-17 23:33:29.714355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.241 [2024-11-17 23:33:29.714403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.241 [2024-11-17 23:33:29.714421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:06.241 [2024-11-17 23:33:29.714437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:06.241 [2024-11-17 23:33:29.714451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.241 [2024-11-17 23:33:29.714487] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:06.241 [2024-11-17 23:33:29.714536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.241 [2024-11-17 23:33:29.714545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:06.241 [2024-11-17 23:33:29.714552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:28:06.241 [2024-11-17 23:33:29.714557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.241 [2024-11-17 23:33:29.714602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.241 [2024-11-17 23:33:29.714609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:06.241 [2024-11-17 23:33:29.714615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:28:06.241 [2024-11-17 23:33:29.714621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.241 [2024-11-17 23:33:29.715441] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1783.484 ms, result 0 00:28:06.241 [2024-11-17 23:33:29.727813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.241 [2024-11-17 23:33:29.743822] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:06.241 [2024-11-17 23:33:29.751909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:06.241 Validate MD5 checksum, iteration 1 00:28:06.241 23:33:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:06.241 23:33:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:06.241 23:33:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:06.241 23:33:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:06.241 23:33:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:06.241 23:33:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:06.241 23:33:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:06.241 23:33:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:06.241 23:33:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:06.241 23:33:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:06.241 23:33:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:06.241 23:33:29 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:06.241 23:33:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:06.241 23:33:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:06.241 23:33:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:06.241 [2024-11-17 23:33:29.975586] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:06.241 [2024-11-17 23:33:29.975698] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92284 ] 00:28:06.501 [2024-11-17 23:33:30.117849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.501 [2024-11-17 23:33:30.141962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.882  [2024-11-17T23:33:32.691Z] Copying: 536/1024 [MB] (536 MBps) [2024-11-17T23:33:34.621Z] Copying: 1024/1024 [MB] (average 521 MBps) 00:28:10.800 00:28:10.800 23:33:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:10.800 23:33:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:12.719 23:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:12.719 23:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e3ce9a44323da658fd8177ff07bc423c 00:28:12.719 23:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e3ce9a44323da658fd8177ff07bc423c != \e\3\c\e\9\a\4\4\3\2\3\d\a\6\5\8\f\d\8\1\7\7\f\f\0\7\b\c\4\2\3\c ]] 00:28:12.719 23:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:12.719 23:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:12.719 23:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:12.719 Validate MD5 checksum, iteration 2 00:28:12.719 23:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:12.719 23:33:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:12.719 23:33:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:12.719 23:33:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:12.719 23:33:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:12.719 23:33:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:12.979 [2024-11-17 23:33:36.544256] Starting SPDK v25.01-pre git sha1 
83e8405e4 / DPDK 22.11.4 initialization... 00:28:12.979 [2024-11-17 23:33:36.544536] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92359 ] 00:28:12.979 [2024-11-17 23:33:36.685600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.979 [2024-11-17 23:33:36.702836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.367  [2024-11-17T23:33:38.761Z] Copying: 618/1024 [MB] (618 MBps) [2024-11-17T23:33:41.301Z] Copying: 1024/1024 [MB] (average 622 MBps) 00:28:17.480 00:28:17.480 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:17.480 23:33:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:18.867 23:33:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:18.867 23:33:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a6eb187a515eadd8c20489eb2df7855f 00:28:18.867 23:33:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a6eb187a515eadd8c20489eb2df7855f != \a\6\e\b\1\8\7\a\5\1\5\e\a\d\d\8\c\2\0\4\8\9\e\b\2\d\f\7\8\5\5\f ]] 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92249 ]] 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92249 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 92249 ']' 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 92249 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92249 00:28:18.868 killing process with pid 92249 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92249' 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 92249 00:28:18.868 23:33:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 92249 00:28:19.134 [2024-11-17 23:33:42.718127] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:19.134 [2024-11-17 23:33:42.724237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.134 [2024-11-17 23:33:42.724272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:19.134 [2024-11-17 23:33:42.724284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:19.134 [2024-11-17 23:33:42.724291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.134 [2024-11-17 23:33:42.724309] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:19.134 [2024-11-17 23:33:42.724832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.134 [2024-11-17 23:33:42.724852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:19.134 [2024-11-17 23:33:42.724860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.512 ms 00:28:19.134 [2024-11-17 23:33:42.724870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.134 [2024-11-17 23:33:42.725066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.134 [2024-11-17 23:33:42.725075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:19.134 [2024-11-17 23:33:42.725082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.169 ms 00:28:19.134 [2024-11-17 23:33:42.725089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.134 [2024-11-17 23:33:42.726259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.134 [2024-11-17 23:33:42.726284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:19.134 [2024-11-17 23:33:42.726292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.157 ms 00:28:19.134 [2024-11-17 23:33:42.726298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.134 [2024-11-17 23:33:42.727221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.134 [2024-11-17 23:33:42.727360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:19.134 [2024-11-17 23:33:42.727372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.894 ms 00:28:19.134 [2024-11-17 23:33:42.727378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.134 [2024-11-17 23:33:42.729060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.134 [2024-11-17 23:33:42.729085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:19.134 [2024-11-17 23:33:42.729093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.651 ms 00:28:19.134 [2024-11-17 23:33:42.729104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.134 [2024-11-17 23:33:42.730644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.134 [2024-11-17 23:33:42.730741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:19.134 [2024-11-17 23:33:42.730754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.424 ms 00:28:19.134 [2024-11-17 23:33:42.730762] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:19.134 [2024-11-17 23:33:42.730825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.134 [2024-11-17 23:33:42.730833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:19.134 [2024-11-17 23:33:42.730840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:28:19.134 [2024-11-17 23:33:42.730846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.134 [2024-11-17 23:33:42.733017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.134 [2024-11-17 23:33:42.733044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:19.134 [2024-11-17 23:33:42.733051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.152 ms 00:28:19.134 [2024-11-17 23:33:42.733064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.134 [2024-11-17 23:33:42.735127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.134 [2024-11-17 23:33:42.735153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:19.134 [2024-11-17 23:33:42.735160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.037 ms 00:28:19.134 [2024-11-17 23:33:42.735165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.134 [2024-11-17 23:33:42.736632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.134 [2024-11-17 23:33:42.736657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:19.134 [2024-11-17 23:33:42.736665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.443 ms 00:28:19.134 [2024-11-17 23:33:42.736670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.134 [2024-11-17 23:33:42.738315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.134 [2024-11-17 23:33:42.738340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:19.134 [2024-11-17 23:33:42.738347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.598 ms 00:28:19.134 [2024-11-17 23:33:42.738353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.134 [2024-11-17 23:33:42.738377] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:19.134 [2024-11-17 23:33:42.738389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:19.134 [2024-11-17 23:33:42.738397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:19.134 [2024-11-17 23:33:42.738404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:19.134 [2024-11-17 23:33:42.738410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:19.134 [2024-11-17 23:33:42.738417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:19.134 [2024-11-17 23:33:42.738423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:19.134 [2024-11-17 23:33:42.738429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:19.134 [2024-11-17 23:33:42.738435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:19.134 
[2024-11-17 23:33:42.738441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:19.134 [2024-11-17 23:33:42.738447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:19.134 [2024-11-17 23:33:42.738453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:19.134 [2024-11-17 23:33:42.738458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:19.134 [2024-11-17 23:33:42.738464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:19.134 [2024-11-17 23:33:42.738470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:19.134 [2024-11-17 23:33:42.738475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:19.134 [2024-11-17 23:33:42.738481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:19.134 [2024-11-17 23:33:42.738487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:19.134 [2024-11-17 23:33:42.738493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:19.134 [2024-11-17 23:33:42.738500] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:19.134 [2024-11-17 23:33:42.738506] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 0e11daf4-0220-4a96-85b9-19e494dd5570 00:28:19.134 [2024-11-17 23:33:42.738512] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:19.134 [2024-11-17 23:33:42.738518] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:19.134 [2024-11-17 23:33:42.738523] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:19.134 [2024-11-17 23:33:42.738529] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:19.134 [2024-11-17 23:33:42.738535] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:19.134 [2024-11-17 23:33:42.738540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:19.134 [2024-11-17 23:33:42.738546] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:19.134 [2024-11-17 23:33:42.738550] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:19.134 [2024-11-17 23:33:42.738556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:19.134 [2024-11-17 23:33:42.738563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.134 [2024-11-17 23:33:42.738572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:19.134 [2024-11-17 23:33:42.738579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.186 ms 00:28:19.134 [2024-11-17 23:33:42.738585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.134 [2024-11-17 23:33:42.740236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.135 [2024-11-17 23:33:42.740259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:19.135 [2024-11-17 23:33:42.740266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.637 ms 00:28:19.135 [2024-11-17 23:33:42.740272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
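[editor's aside] Every management step in the startup and shutdown sequences above is bracketed by the same four trace_step records (Action, name, duration, status). When digging through one of these logs, a throwaway awk filter can tabulate per-step durations; this is an ad-hoc helper, not part of the test suite, the file name ftl_test.log is hypothetical, and it assumes the raw log keeps one record per line (the wrapping above is a capture artifact):

awk '
  /428:trace_step/ { sub(/.*name: /, ""); name = $0 }
  /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, ""); printf "%10s ms  %s\n", $0, name }
' ftl_test.log

Run against the shutdown sequence here, it would print lines such as "     1.651 ms  Persist NV cache metadata", making the slow steps (e.g. Persist band info metadata at 2.152 ms) easy to spot.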
00:28:19.135 [2024-11-17 23:33:42.740364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.135 [2024-11-17 23:33:42.740370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:19.135 [2024-11-17 23:33:42.740377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.079 ms 00:28:19.135 [2024-11-17 23:33:42.740382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.135 [2024-11-17 23:33:42.746369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.135 [2024-11-17 23:33:42.746453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:19.135 [2024-11-17 23:33:42.746493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.135 [2024-11-17 23:33:42.746517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.135 [2024-11-17 23:33:42.746557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.135 [2024-11-17 23:33:42.746573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:19.135 [2024-11-17 23:33:42.746588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.135 [2024-11-17 23:33:42.746602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.135 [2024-11-17 23:33:42.746658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.135 [2024-11-17 23:33:42.746767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:19.135 [2024-11-17 23:33:42.746789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.135 [2024-11-17 23:33:42.746804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.135 [2024-11-17 23:33:42.746828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.135 [2024-11-17 23:33:42.746847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:19.135 [2024-11-17 23:33:42.746862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.135 [2024-11-17 23:33:42.746907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.135 [2024-11-17 23:33:42.758087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.135 [2024-11-17 23:33:42.759084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:19.135 [2024-11-17 23:33:42.759148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.135 [2024-11-17 23:33:42.759174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.135 [2024-11-17 23:33:42.767563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.135 [2024-11-17 23:33:42.767675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:19.135 [2024-11-17 23:33:42.767715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.135 [2024-11-17 23:33:42.767734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.135 [2024-11-17 23:33:42.767819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.135 [2024-11-17 23:33:42.767845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:19.135 [2024-11-17 23:33:42.767861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.135 [2024-11-17 23:33:42.767876] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.135 [2024-11-17 23:33:42.767960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.135 [2024-11-17 23:33:42.768026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:19.135 [2024-11-17 23:33:42.768048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.135 [2024-11-17 23:33:42.768063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.135 [2024-11-17 23:33:42.768143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.135 [2024-11-17 23:33:42.768167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:19.135 [2024-11-17 23:33:42.768185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.135 [2024-11-17 23:33:42.768199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.135 [2024-11-17 23:33:42.768265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.135 [2024-11-17 23:33:42.768290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:19.135 [2024-11-17 23:33:42.768306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.135 [2024-11-17 23:33:42.768353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.135 [2024-11-17 23:33:42.768397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.135 [2024-11-17 23:33:42.768408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:19.135 [2024-11-17 23:33:42.768415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.135 [2024-11-17 23:33:42.768422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.135 [2024-11-17 23:33:42.768466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.135 [2024-11-17 23:33:42.768474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:19.135 [2024-11-17 23:33:42.768484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.135 [2024-11-17 23:33:42.768490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.135 [2024-11-17 23:33:42.768617] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 44.350 ms, result 0 00:28:21.693 23:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:21.693 23:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:21.693 23:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:21.693 23:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:21.693 23:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:21.693 23:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:21.693 Remove shared memory files 00:28:21.693 23:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:21.693 23:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:21.693 23:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:21.693 23:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:21.693 23:33:45 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid92041 00:28:21.693 23:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:21.693 23:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:21.693 ************************************ 00:28:21.693 END TEST ftl_upgrade_shutdown 00:28:21.693 ************************************ 00:28:21.693 00:28:21.693 real 1m20.054s 00:28:21.693 user 1m44.356s 00:28:21.693 sys 0m20.174s 00:28:21.693 23:33:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:21.693 23:33:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:21.693 23:33:45 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:21.693 23:33:45 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:21.693 23:33:45 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:28:21.693 23:33:45 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:21.693 23:33:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:21.693 ************************************ 00:28:21.693 START TEST ftl_restore_fast 00:28:21.693 ************************************ 00:28:21.693 23:33:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:21.954 * Looking for test storage... 00:28:21.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:21.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.954 --rc genhtml_branch_coverage=1 00:28:21.954 --rc genhtml_function_coverage=1 00:28:21.954 --rc genhtml_legend=1 00:28:21.954 --rc geninfo_all_blocks=1 00:28:21.954 --rc geninfo_unexecuted_blocks=1 00:28:21.954 00:28:21.954 ' 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:21.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.954 --rc genhtml_branch_coverage=1 00:28:21.954 --rc genhtml_function_coverage=1 00:28:21.954 --rc genhtml_legend=1 00:28:21.954 --rc geninfo_all_blocks=1 00:28:21.954 --rc geninfo_unexecuted_blocks=1 00:28:21.954 00:28:21.954 ' 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:21.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.954 --rc genhtml_branch_coverage=1 00:28:21.954 --rc genhtml_function_coverage=1 00:28:21.954 --rc genhtml_legend=1 00:28:21.954 --rc geninfo_all_blocks=1 00:28:21.954 --rc geninfo_unexecuted_blocks=1 00:28:21.954 00:28:21.954 ' 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:21.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.954 --rc genhtml_branch_coverage=1 00:28:21.954 --rc genhtml_function_coverage=1 00:28:21.954 --rc genhtml_legend=1 00:28:21.954 --rc geninfo_all_blocks=1 00:28:21.954 --rc geninfo_unexecuted_blocks=1 00:28:21.954 00:28:21.954 ' 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
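The xtrace above steps through the cmp_versions helper from scripts/common.sh, deciding whether the installed lcov predates version 2 so the test can pick the legacy --rc flag spelling. A standalone bash sketch of the same dotted-version comparison, for reference (this mirrors the logic being traced rather than quoting scripts/common.sh verbatim):

cmp_versions() {  # usage: cmp_versions 1.15 '<' 2  -> exit status 0 if the relation holds
    local -a a b
    local v
    IFS=.- read -ra a <<< "$1"
    IFS=.- read -ra b <<< "$3"
    for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
        # missing components compare as 0, so "1.15" behaves like "1.15.0"
        if ((${a[v]:-0} > ${b[v]:-0})); then [[ $2 == '>' ]]; return; fi
        if ((${a[v]:-0} < ${b[v]:-0})); then [[ $2 == '<' ]]; return; fi
    done
    [[ $2 == '==' ]]
}

# pick coverage flags the way the trace above does:
if cmp_versions "$(lcov --version | awk '{print $NF}')" '<' 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi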
00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.gYWIiOPGYG 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:21.954 23:33:45 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:21.954 23:33:45 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:21.955 23:33:45 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:21.955 23:33:45 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:21.955 23:33:45 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=92510 00:28:21.955 23:33:45 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 92510 00:28:21.955 23:33:45 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 92510 ']' 00:28:21.955 23:33:45 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.955 23:33:45 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.955 23:33:45 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.955 23:33:45 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:21.955 23:33:45 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.955 23:33:45 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:21.955 [2024-11-17 23:33:45.728311] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:21.955 [2024-11-17 23:33:45.728674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92510 ] 00:28:22.214 [2024-11-17 23:33:45.874058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.214 [2024-11-17 23:33:45.908565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.781 23:33:46 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.781 23:33:46 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:28:22.781 23:33:46 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:22.781 23:33:46 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:22.781 23:33:46 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:22.781 23:33:46 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:22.781 23:33:46 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:22.781 23:33:46 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:23.039 23:33:46 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:23.039 23:33:46 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:23.039 23:33:46 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:23.039 23:33:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:23.039 23:33:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:23.039 23:33:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:23.039 23:33:46 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:28:23.039 23:33:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:23.296 23:33:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:23.296 { 00:28:23.296 "name": "nvme0n1", 00:28:23.296 "aliases": [ 00:28:23.296 "850aa994-f53a-4000-8820-72f021d49d91" 00:28:23.296 ], 00:28:23.296 "product_name": "NVMe disk", 00:28:23.296 "block_size": 4096, 00:28:23.296 "num_blocks": 1310720, 00:28:23.296 "uuid": "850aa994-f53a-4000-8820-72f021d49d91", 00:28:23.296 "numa_id": -1, 00:28:23.296 "assigned_rate_limits": { 00:28:23.296 "rw_ios_per_sec": 0, 00:28:23.296 "rw_mbytes_per_sec": 0, 00:28:23.296 "r_mbytes_per_sec": 0, 00:28:23.296 "w_mbytes_per_sec": 0 00:28:23.296 }, 00:28:23.296 "claimed": true, 00:28:23.296 "claim_type": "read_many_write_one", 00:28:23.296 "zoned": false, 00:28:23.296 "supported_io_types": { 00:28:23.296 "read": true, 00:28:23.296 "write": true, 00:28:23.296 "unmap": true, 00:28:23.296 "flush": true, 00:28:23.296 "reset": true, 00:28:23.296 "nvme_admin": true, 00:28:23.296 "nvme_io": true, 00:28:23.296 "nvme_io_md": false, 00:28:23.296 "write_zeroes": true, 00:28:23.296 "zcopy": false, 00:28:23.296 "get_zone_info": false, 00:28:23.296 "zone_management": false, 00:28:23.296 "zone_append": false, 00:28:23.296 "compare": true, 00:28:23.296 "compare_and_write": false, 00:28:23.296 "abort": true, 00:28:23.296 "seek_hole": false, 00:28:23.296 "seek_data": false, 00:28:23.296 "copy": true, 00:28:23.296 "nvme_iov_md": false 00:28:23.296 }, 00:28:23.296 "driver_specific": { 00:28:23.296 "nvme": [ 00:28:23.296 { 00:28:23.296 "pci_address": "0000:00:11.0", 00:28:23.296 "trid": { 00:28:23.296 "trtype": "PCIe", 00:28:23.296 "traddr": "0000:00:11.0" 00:28:23.296 }, 00:28:23.296 "ctrlr_data": { 00:28:23.296 "cntlid": 0, 00:28:23.296 "vendor_id": "0x1b36", 00:28:23.296 "model_number": "QEMU NVMe Ctrl", 00:28:23.296 "serial_number": "12341", 00:28:23.296 "firmware_revision": "8.0.0", 00:28:23.296 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:23.296 "oacs": { 00:28:23.296 "security": 0, 00:28:23.296 "format": 1, 00:28:23.296 "firmware": 0, 00:28:23.296 "ns_manage": 1 00:28:23.296 }, 00:28:23.296 "multi_ctrlr": false, 00:28:23.296 "ana_reporting": false 00:28:23.296 }, 00:28:23.296 "vs": { 00:28:23.296 "nvme_version": "1.4" 00:28:23.296 }, 00:28:23.296 "ns_data": { 00:28:23.296 "id": 1, 00:28:23.296 "can_share": false 00:28:23.296 } 00:28:23.296 } 00:28:23.296 ], 00:28:23.296 "mp_policy": "active_passive" 00:28:23.296 } 00:28:23.296 } 00:28:23.296 ]' 00:28:23.296 23:33:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:23.296 23:33:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:23.296 23:33:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:23.554 23:33:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:23.554 23:33:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:23.554 23:33:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:28:23.554 23:33:47 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:23.554 23:33:47 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:23.554 23:33:47 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:23.554 23:33:47 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:23.554 23:33:47 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:23.555 23:33:47 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=f9f58f5e-ee97-4ed2-9917-e676ca3cb10a 00:28:23.555 23:33:47 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:23.555 23:33:47 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f9f58f5e-ee97-4ed2-9917-e676ca3cb10a 00:28:23.813 23:33:47 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:24.070 23:33:47 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=b1e8b18b-abde-4e30-aaf7-4a0b2656f704 00:28:24.070 23:33:47 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b1e8b18b-abde-4e30-aaf7-4a0b2656f704 00:28:24.328 23:33:47 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=2acf6a8b-0d50-4189-8497-062bd5dcf76f 00:28:24.329 23:33:47 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:24.329 23:33:47 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2acf6a8b-0d50-4189-8497-062bd5dcf76f 00:28:24.329 23:33:47 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:24.329 23:33:47 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:24.329 23:33:47 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=2acf6a8b-0d50-4189-8497-062bd5dcf76f 00:28:24.329 23:33:47 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:24.329 23:33:47 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 2acf6a8b-0d50-4189-8497-062bd5dcf76f 00:28:24.329 23:33:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=2acf6a8b-0d50-4189-8497-062bd5dcf76f 00:28:24.329 23:33:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:24.329 23:33:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:24.329 23:33:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:24.329 23:33:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2acf6a8b-0d50-4189-8497-062bd5dcf76f 00:28:24.587 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:24.587 { 00:28:24.587 "name": "2acf6a8b-0d50-4189-8497-062bd5dcf76f", 00:28:24.587 "aliases": [ 00:28:24.587 "lvs/nvme0n1p0" 00:28:24.587 ], 00:28:24.587 "product_name": "Logical Volume", 00:28:24.587 "block_size": 4096, 00:28:24.587 "num_blocks": 26476544, 00:28:24.587 "uuid": "2acf6a8b-0d50-4189-8497-062bd5dcf76f", 00:28:24.587 "assigned_rate_limits": { 00:28:24.587 "rw_ios_per_sec": 0, 00:28:24.587 "rw_mbytes_per_sec": 0, 00:28:24.587 "r_mbytes_per_sec": 0, 00:28:24.587 "w_mbytes_per_sec": 0 00:28:24.587 }, 00:28:24.587 "claimed": false, 00:28:24.587 "zoned": false, 00:28:24.587 "supported_io_types": { 00:28:24.587 "read": true, 00:28:24.587 "write": true, 00:28:24.587 "unmap": true, 00:28:24.587 "flush": false, 00:28:24.587 "reset": true, 00:28:24.587 "nvme_admin": false, 00:28:24.587 "nvme_io": false, 00:28:24.587 "nvme_io_md": false, 00:28:24.587 "write_zeroes": true, 00:28:24.587 "zcopy": false, 00:28:24.587 "get_zone_info": false, 00:28:24.587 "zone_management": false, 00:28:24.587 
"zone_append": false, 00:28:24.587 "compare": false, 00:28:24.587 "compare_and_write": false, 00:28:24.587 "abort": false, 00:28:24.587 "seek_hole": true, 00:28:24.587 "seek_data": true, 00:28:24.587 "copy": false, 00:28:24.587 "nvme_iov_md": false 00:28:24.587 }, 00:28:24.587 "driver_specific": { 00:28:24.587 "lvol": { 00:28:24.587 "lvol_store_uuid": "b1e8b18b-abde-4e30-aaf7-4a0b2656f704", 00:28:24.587 "base_bdev": "nvme0n1", 00:28:24.587 "thin_provision": true, 00:28:24.587 "num_allocated_clusters": 0, 00:28:24.587 "snapshot": false, 00:28:24.587 "clone": false, 00:28:24.587 "esnap_clone": false 00:28:24.587 } 00:28:24.587 } 00:28:24.587 } 00:28:24.587 ]' 00:28:24.587 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:24.587 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:24.587 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:24.587 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:24.587 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:24.587 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:24.587 23:33:48 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:24.587 23:33:48 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:24.587 23:33:48 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:24.844 23:33:48 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:24.844 23:33:48 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:24.844 23:33:48 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 2acf6a8b-0d50-4189-8497-062bd5dcf76f 00:28:24.844 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=2acf6a8b-0d50-4189-8497-062bd5dcf76f 00:28:24.844 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:24.844 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:24.844 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:24.844 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2acf6a8b-0d50-4189-8497-062bd5dcf76f 00:28:25.102 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:25.102 { 00:28:25.102 "name": "2acf6a8b-0d50-4189-8497-062bd5dcf76f", 00:28:25.102 "aliases": [ 00:28:25.102 "lvs/nvme0n1p0" 00:28:25.102 ], 00:28:25.102 "product_name": "Logical Volume", 00:28:25.102 "block_size": 4096, 00:28:25.102 "num_blocks": 26476544, 00:28:25.102 "uuid": "2acf6a8b-0d50-4189-8497-062bd5dcf76f", 00:28:25.102 "assigned_rate_limits": { 00:28:25.102 "rw_ios_per_sec": 0, 00:28:25.102 "rw_mbytes_per_sec": 0, 00:28:25.102 "r_mbytes_per_sec": 0, 00:28:25.102 "w_mbytes_per_sec": 0 00:28:25.102 }, 00:28:25.102 "claimed": false, 00:28:25.102 "zoned": false, 00:28:25.102 "supported_io_types": { 00:28:25.102 "read": true, 00:28:25.102 "write": true, 00:28:25.102 "unmap": true, 00:28:25.102 "flush": false, 00:28:25.102 "reset": true, 00:28:25.102 "nvme_admin": false, 00:28:25.102 "nvme_io": false, 00:28:25.102 "nvme_io_md": false, 00:28:25.102 "write_zeroes": true, 00:28:25.102 "zcopy": false, 00:28:25.102 "get_zone_info": false, 00:28:25.102 
"zone_management": false, 00:28:25.102 "zone_append": false, 00:28:25.102 "compare": false, 00:28:25.102 "compare_and_write": false, 00:28:25.102 "abort": false, 00:28:25.102 "seek_hole": true, 00:28:25.102 "seek_data": true, 00:28:25.102 "copy": false, 00:28:25.102 "nvme_iov_md": false 00:28:25.102 }, 00:28:25.102 "driver_specific": { 00:28:25.102 "lvol": { 00:28:25.102 "lvol_store_uuid": "b1e8b18b-abde-4e30-aaf7-4a0b2656f704", 00:28:25.102 "base_bdev": "nvme0n1", 00:28:25.102 "thin_provision": true, 00:28:25.102 "num_allocated_clusters": 0, 00:28:25.102 "snapshot": false, 00:28:25.102 "clone": false, 00:28:25.102 "esnap_clone": false 00:28:25.102 } 00:28:25.102 } 00:28:25.102 } 00:28:25.102 ]' 00:28:25.102 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:25.102 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:25.102 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:25.102 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:25.102 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:25.102 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:25.102 23:33:48 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:25.102 23:33:48 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:25.361 23:33:48 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:25.361 23:33:48 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 2acf6a8b-0d50-4189-8497-062bd5dcf76f 00:28:25.361 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=2acf6a8b-0d50-4189-8497-062bd5dcf76f 00:28:25.361 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:25.361 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:28:25.361 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:28:25.361 23:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2acf6a8b-0d50-4189-8497-062bd5dcf76f 00:28:25.361 23:33:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:25.361 { 00:28:25.361 "name": "2acf6a8b-0d50-4189-8497-062bd5dcf76f", 00:28:25.361 "aliases": [ 00:28:25.361 "lvs/nvme0n1p0" 00:28:25.361 ], 00:28:25.361 "product_name": "Logical Volume", 00:28:25.361 "block_size": 4096, 00:28:25.361 "num_blocks": 26476544, 00:28:25.361 "uuid": "2acf6a8b-0d50-4189-8497-062bd5dcf76f", 00:28:25.361 "assigned_rate_limits": { 00:28:25.361 "rw_ios_per_sec": 0, 00:28:25.361 "rw_mbytes_per_sec": 0, 00:28:25.361 "r_mbytes_per_sec": 0, 00:28:25.361 "w_mbytes_per_sec": 0 00:28:25.361 }, 00:28:25.361 "claimed": false, 00:28:25.361 "zoned": false, 00:28:25.361 "supported_io_types": { 00:28:25.361 "read": true, 00:28:25.361 "write": true, 00:28:25.361 "unmap": true, 00:28:25.361 "flush": false, 00:28:25.361 "reset": true, 00:28:25.361 "nvme_admin": false, 00:28:25.361 "nvme_io": false, 00:28:25.361 "nvme_io_md": false, 00:28:25.361 "write_zeroes": true, 00:28:25.361 "zcopy": false, 00:28:25.361 "get_zone_info": false, 00:28:25.361 "zone_management": false, 00:28:25.361 "zone_append": false, 00:28:25.361 "compare": false, 00:28:25.361 "compare_and_write": false, 00:28:25.361 "abort": false, 
00:28:25.361 "seek_hole": true, 00:28:25.361 "seek_data": true, 00:28:25.361 "copy": false, 00:28:25.361 "nvme_iov_md": false 00:28:25.361 }, 00:28:25.361 "driver_specific": { 00:28:25.361 "lvol": { 00:28:25.361 "lvol_store_uuid": "b1e8b18b-abde-4e30-aaf7-4a0b2656f704", 00:28:25.361 "base_bdev": "nvme0n1", 00:28:25.361 "thin_provision": true, 00:28:25.361 "num_allocated_clusters": 0, 00:28:25.361 "snapshot": false, 00:28:25.361 "clone": false, 00:28:25.361 "esnap_clone": false 00:28:25.361 } 00:28:25.361 } 00:28:25.361 } 00:28:25.361 ]' 00:28:25.361 23:33:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:25.361 23:33:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:28:25.620 23:33:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:25.620 23:33:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:25.620 23:33:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:25.620 23:33:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:28:25.620 23:33:49 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:25.620 23:33:49 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 2acf6a8b-0d50-4189-8497-062bd5dcf76f --l2p_dram_limit 10' 00:28:25.620 23:33:49 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:25.620 23:33:49 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:25.620 23:33:49 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:25.620 23:33:49 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:25.620 23:33:49 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:25.620 23:33:49 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2acf6a8b-0d50-4189-8497-062bd5dcf76f --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:25.620 [2024-11-17 23:33:49.403019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.620 [2024-11-17 23:33:49.403060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:25.620 [2024-11-17 23:33:49.403072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:25.620 [2024-11-17 23:33:49.403081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.620 [2024-11-17 23:33:49.403128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.620 [2024-11-17 23:33:49.403138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:25.620 [2024-11-17 23:33:49.403151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:28:25.621 [2024-11-17 23:33:49.403161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.621 [2024-11-17 23:33:49.403179] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:25.621 [2024-11-17 23:33:49.403401] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:25.621 [2024-11-17 23:33:49.403422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.621 [2024-11-17 23:33:49.403430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:25.621 [2024-11-17 23:33:49.403440] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:28:25.621 [2024-11-17 23:33:49.403448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.621 [2024-11-17 23:33:49.403500] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 86b6b1dc-b32b-40df-8af2-a502fed8f36d 00:28:25.621 [2024-11-17 23:33:49.404814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.621 [2024-11-17 23:33:49.404837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:25.621 [2024-11-17 23:33:49.404849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:25.621 [2024-11-17 23:33:49.404859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.621 [2024-11-17 23:33:49.411703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.621 [2024-11-17 23:33:49.411818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:25.621 [2024-11-17 23:33:49.411834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.797 ms 00:28:25.621 [2024-11-17 23:33:49.411840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.621 [2024-11-17 23:33:49.411918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.621 [2024-11-17 23:33:49.411926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:25.621 [2024-11-17 23:33:49.411934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:25.621 [2024-11-17 23:33:49.411940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.621 [2024-11-17 23:33:49.411976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.621 [2024-11-17 23:33:49.411984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:25.621 [2024-11-17 23:33:49.411992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:25.621 [2024-11-17 23:33:49.411999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.621 [2024-11-17 23:33:49.412017] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:25.621 [2024-11-17 23:33:49.413670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.621 [2024-11-17 23:33:49.413702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:25.621 [2024-11-17 23:33:49.413710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.658 ms 00:28:25.621 [2024-11-17 23:33:49.413721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.621 [2024-11-17 23:33:49.413750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.621 [2024-11-17 23:33:49.413759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:25.621 [2024-11-17 23:33:49.413769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:25.621 [2024-11-17 23:33:49.413780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.621 [2024-11-17 23:33:49.413800] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:25.621 [2024-11-17 23:33:49.413929] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:25.621 [2024-11-17 23:33:49.413939] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:25.621 [2024-11-17 23:33:49.413949] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:25.621 [2024-11-17 23:33:49.413957] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:25.621 [2024-11-17 23:33:49.413970] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:25.621 [2024-11-17 23:33:49.413977] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:25.621 [2024-11-17 23:33:49.413987] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:25.621 [2024-11-17 23:33:49.413992] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:25.621 [2024-11-17 23:33:49.413999] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:25.621 [2024-11-17 23:33:49.414005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.621 [2024-11-17 23:33:49.414013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:25.621 [2024-11-17 23:33:49.414019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:28:25.621 [2024-11-17 23:33:49.414027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.621 [2024-11-17 23:33:49.414091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.621 [2024-11-17 23:33:49.414101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:25.621 [2024-11-17 23:33:49.414109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:25.621 [2024-11-17 23:33:49.414118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.621 [2024-11-17 23:33:49.414193] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:25.621 [2024-11-17 23:33:49.414202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:25.621 [2024-11-17 23:33:49.414209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:25.621 [2024-11-17 23:33:49.414216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.621 [2024-11-17 23:33:49.414222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:25.621 [2024-11-17 23:33:49.414229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:25.621 [2024-11-17 23:33:49.414235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:25.621 [2024-11-17 23:33:49.414243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:25.621 [2024-11-17 23:33:49.414248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:25.621 [2024-11-17 23:33:49.414255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:25.621 [2024-11-17 23:33:49.414260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:25.621 [2024-11-17 23:33:49.414268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:25.621 [2024-11-17 23:33:49.414274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:25.621 [2024-11-17 23:33:49.414283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:25.621 [2024-11-17 23:33:49.414291] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:25.621 [2024-11-17 23:33:49.414298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.621 [2024-11-17 23:33:49.414304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:25.621 [2024-11-17 23:33:49.414312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:25.621 [2024-11-17 23:33:49.414316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.621 [2024-11-17 23:33:49.414324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:25.621 [2024-11-17 23:33:49.414330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:25.621 [2024-11-17 23:33:49.414338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:25.621 [2024-11-17 23:33:49.414343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:25.621 [2024-11-17 23:33:49.414350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:25.621 [2024-11-17 23:33:49.414356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:25.621 [2024-11-17 23:33:49.414363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:25.621 [2024-11-17 23:33:49.414370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:25.621 [2024-11-17 23:33:49.414378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:25.621 [2024-11-17 23:33:49.414384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:25.621 [2024-11-17 23:33:49.414396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:25.621 [2024-11-17 23:33:49.414402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:25.621 [2024-11-17 23:33:49.414410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:25.621 [2024-11-17 23:33:49.414416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:25.621 [2024-11-17 23:33:49.414423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:25.621 [2024-11-17 23:33:49.414429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:25.621 [2024-11-17 23:33:49.414437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:25.621 [2024-11-17 23:33:49.414443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:25.621 [2024-11-17 23:33:49.414450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:25.621 [2024-11-17 23:33:49.414455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:25.621 [2024-11-17 23:33:49.414462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.621 [2024-11-17 23:33:49.414469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:25.621 [2024-11-17 23:33:49.414476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:25.621 [2024-11-17 23:33:49.414481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.621 [2024-11-17 23:33:49.414488] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:25.621 [2024-11-17 23:33:49.414495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:25.621 [2024-11-17 23:33:49.414505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:28:25.621 [2024-11-17 23:33:49.414521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:25.621 [2024-11-17 23:33:49.414530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:25.621 [2024-11-17 23:33:49.414536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:25.621 [2024-11-17 23:33:49.414544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:25.622 [2024-11-17 23:33:49.414550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:25.622 [2024-11-17 23:33:49.414558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:25.622 [2024-11-17 23:33:49.414564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:25.622 [2024-11-17 23:33:49.414574] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:25.622 [2024-11-17 23:33:49.414586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:25.622 [2024-11-17 23:33:49.414595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:25.622 [2024-11-17 23:33:49.414601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:25.622 [2024-11-17 23:33:49.414609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:25.622 [2024-11-17 23:33:49.414615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:25.622 [2024-11-17 23:33:49.414623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:25.622 [2024-11-17 23:33:49.414631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:25.622 [2024-11-17 23:33:49.414640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:25.622 [2024-11-17 23:33:49.414646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:25.622 [2024-11-17 23:33:49.414654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:25.622 [2024-11-17 23:33:49.414661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:25.622 [2024-11-17 23:33:49.414668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:25.622 [2024-11-17 23:33:49.414674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:25.622 [2024-11-17 23:33:49.414682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:25.622 [2024-11-17 23:33:49.414689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
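The blk_offs/blk_sz values in this superblock metadata dump are counted in 4096-byte blocks (the block_size reported for both bdevs above), while the earlier layout dump prints MiB. A quick sketch for cross-checking the two by converting the hex block counts, assuming the full log was saved as ftl.log (strtonum is a gawk extension, so this needs gawk rather than POSIX awk):

gawk '/Region type/ {
    for (i = 1; i <= NF; i++)
        if (split($i, kv, ":") == 2 && kv[1] != "type" && kv[2] ~ /^0x/)
            $i = kv[1] ":" sprintf("%.2f MiB", strtonum(kv[2]) * 4096 / 1048576)
    print
}' ftl.log

For example, blk_offs:0x5020 blk_sz:0x80 converts to 80.12 MiB and 0.50 MiB, matching the band_md region in the NV cache layout printed above.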
00:28:25.622 [2024-11-17 23:33:49.414697] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:25.622 [2024-11-17 23:33:49.414704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:25.622 [2024-11-17 23:33:49.414712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:25.622 [2024-11-17 23:33:49.414719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:25.622 [2024-11-17 23:33:49.414726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:25.622 [2024-11-17 23:33:49.414733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:25.622 [2024-11-17 23:33:49.414741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.622 [2024-11-17 23:33:49.414747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:25.622 [2024-11-17 23:33:49.414759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:28:25.622 [2024-11-17 23:33:49.414768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.622 [2024-11-17 23:33:49.414799] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:25.622 [2024-11-17 23:33:49.414806] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:29.846 [2024-11-17 23:33:53.426995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.846 [2024-11-17 23:33:53.427199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:29.846 [2024-11-17 23:33:53.427222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4012.176 ms 00:28:29.846 [2024-11-17 23:33:53.427230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.846 [2024-11-17 23:33:53.439017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.847 [2024-11-17 23:33:53.439052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:29.847 [2024-11-17 23:33:53.439065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.701 ms 00:28:29.847 [2024-11-17 23:33:53.439072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.847 [2024-11-17 23:33:53.439172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.847 [2024-11-17 23:33:53.439179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:29.847 [2024-11-17 23:33:53.439189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:28:29.847 [2024-11-17 23:33:53.439195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.847 [2024-11-17 23:33:53.449784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.847 [2024-11-17 23:33:53.449817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:29.847 [2024-11-17 23:33:53.449828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.556 ms 00:28:29.847 [2024-11-17 23:33:53.449838] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.847 [2024-11-17 23:33:53.449862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.847 [2024-11-17 23:33:53.449869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:29.847 [2024-11-17 23:33:53.449903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:29.847 [2024-11-17 23:33:53.449910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.847 [2024-11-17 23:33:53.450342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.847 [2024-11-17 23:33:53.450357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:29.847 [2024-11-17 23:33:53.450367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:28:29.847 [2024-11-17 23:33:53.450378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.847 [2024-11-17 23:33:53.450478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.847 [2024-11-17 23:33:53.450493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:29.847 [2024-11-17 23:33:53.450503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:28:29.847 [2024-11-17 23:33:53.450510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.847 [2024-11-17 23:33:53.457316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.847 [2024-11-17 23:33:53.457458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:29.847 [2024-11-17 23:33:53.457474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.785 ms 00:28:29.847 [2024-11-17 23:33:53.457480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.847 [2024-11-17 23:33:53.465176] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:29.847 [2024-11-17 23:33:53.468165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.847 [2024-11-17 23:33:53.468271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:29.847 [2024-11-17 23:33:53.468284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.628 ms 00:28:29.847 [2024-11-17 23:33:53.468292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.847 [2024-11-17 23:33:53.539345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.847 [2024-11-17 23:33:53.539391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:29.847 [2024-11-17 23:33:53.539407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.030 ms 00:28:29.847 [2024-11-17 23:33:53.539419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.847 [2024-11-17 23:33:53.539604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.847 [2024-11-17 23:33:53.539624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:29.847 [2024-11-17 23:33:53.539639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:28:29.847 [2024-11-17 23:33:53.539650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.847 [2024-11-17 23:33:53.543994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.847 [2024-11-17 23:33:53.544039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial 
band info metadata 00:28:29.847 [2024-11-17 23:33:53.544054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.324 ms 00:28:29.847 [2024-11-17 23:33:53.544065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.847 [2024-11-17 23:33:53.547531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.847 [2024-11-17 23:33:53.547567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:29.847 [2024-11-17 23:33:53.547578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.425 ms 00:28:29.847 [2024-11-17 23:33:53.547588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.847 [2024-11-17 23:33:53.547922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.847 [2024-11-17 23:33:53.547936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:29.847 [2024-11-17 23:33:53.547946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:28:29.847 [2024-11-17 23:33:53.547961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.847 [2024-11-17 23:33:53.579944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.847 [2024-11-17 23:33:53.579990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:29.847 [2024-11-17 23:33:53.580006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.963 ms 00:28:29.847 [2024-11-17 23:33:53.580017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.847 [2024-11-17 23:33:53.585388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.847 [2024-11-17 23:33:53.585547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:29.847 [2024-11-17 23:33:53.585564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.319 ms 00:28:29.847 [2024-11-17 23:33:53.585575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.847 [2024-11-17 23:33:53.589619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.847 [2024-11-17 23:33:53.589657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:29.847 [2024-11-17 23:33:53.589666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.011 ms 00:28:29.847 [2024-11-17 23:33:53.589675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.847 [2024-11-17 23:33:53.594059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.847 [2024-11-17 23:33:53.594096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:29.847 [2024-11-17 23:33:53.594106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.351 ms 00:28:29.847 [2024-11-17 23:33:53.594117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.847 [2024-11-17 23:33:53.594155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.847 [2024-11-17 23:33:53.594167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:29.847 [2024-11-17 23:33:53.594181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:29.847 [2024-11-17 23:33:53.594191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.847 [2024-11-17 23:33:53.594259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.847 [2024-11-17 23:33:53.594271] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:29.847 [2024-11-17 23:33:53.594280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:29.847 [2024-11-17 23:33:53.594292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.847 [2024-11-17 23:33:53.595258] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4191.767 ms, result 0 00:28:29.847 { 00:28:29.847 "name": "ftl0", 00:28:29.847 "uuid": "86b6b1dc-b32b-40df-8af2-a502fed8f36d" 00:28:29.847 } 00:28:29.847 23:33:53 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:29.847 23:33:53 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:30.106 23:33:53 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:28:30.106 23:33:53 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:30.368 [2024-11-17 23:33:54.024366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.368 [2024-11-17 23:33:54.024621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:30.368 [2024-11-17 23:33:54.024661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:30.368 [2024-11-17 23:33:54.024672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.368 [2024-11-17 23:33:54.024719] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:30.368 [2024-11-17 23:33:54.025714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.368 [2024-11-17 23:33:54.025779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:30.368 [2024-11-17 23:33:54.025794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:28:30.368 [2024-11-17 23:33:54.025807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.368 [2024-11-17 23:33:54.026118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.368 [2024-11-17 23:33:54.026136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:30.368 [2024-11-17 23:33:54.026156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:28:30.368 [2024-11-17 23:33:54.026173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.368 [2024-11-17 23:33:54.029436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.368 [2024-11-17 23:33:54.029470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:30.368 [2024-11-17 23:33:54.029481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.246 ms 00:28:30.368 [2024-11-17 23:33:54.029493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.368 [2024-11-17 23:33:54.035984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.368 [2024-11-17 23:33:54.036031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:30.368 [2024-11-17 23:33:54.036044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.472 ms 00:28:30.368 [2024-11-17 23:33:54.036060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.368 [2024-11-17 23:33:54.039295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
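The bdev_ftl_unload trace beginning here tears down a stack that the preceding xtrace assembled one RPC at a time. Condensed, the build-and-unload sequence this test issued against the running spdk_tgt was the following (arguments as they appear in the trace above; the two UUIDs are the ones this run generated, so a rerun would produce different values):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache device
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b1e8b18b-abde-4e30-aaf7-4a0b2656f704   # thin-provisioned 103424 MiB lvol
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1                                             # 5171 MiB cache slice
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2acf6a8b-0d50-4189-8497-062bd5dcf76f --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0

Note that --fast-shutdown is a property set at bdev_ftl_create time; the unload trace below still persists the L2P, NV cache, and band metadata before marking the device clean.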
00:28:30.368 [2024-11-17 23:33:54.039492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:30.368 [2024-11-17 23:33:54.039512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.139 ms 00:28:30.368 [2024-11-17 23:33:54.039523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.368 [2024-11-17 23:33:54.046664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.368 [2024-11-17 23:33:54.046853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:30.368 [2024-11-17 23:33:54.046873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.020 ms 00:28:30.368 [2024-11-17 23:33:54.046916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.368 [2024-11-17 23:33:54.047069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.368 [2024-11-17 23:33:54.047088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:30.368 [2024-11-17 23:33:54.047102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:28:30.368 [2024-11-17 23:33:54.047113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.368 [2024-11-17 23:33:54.050117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.368 [2024-11-17 23:33:54.050172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:30.368 [2024-11-17 23:33:54.050181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.985 ms 00:28:30.368 [2024-11-17 23:33:54.050192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.368 [2024-11-17 23:33:54.052845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.368 [2024-11-17 23:33:54.052910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:30.368 [2024-11-17 23:33:54.052920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.609 ms 00:28:30.368 [2024-11-17 23:33:54.052930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.368 [2024-11-17 23:33:54.055429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.368 [2024-11-17 23:33:54.055488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:30.368 [2024-11-17 23:33:54.055508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.454 ms 00:28:30.368 [2024-11-17 23:33:54.055522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.368 [2024-11-17 23:33:54.057655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.368 [2024-11-17 23:33:54.057708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:30.368 [2024-11-17 23:33:54.057719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.058 ms 00:28:30.369 [2024-11-17 23:33:54.057729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.369 [2024-11-17 23:33:54.057772] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:30.369 [2024-11-17 23:33:54.057793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.057805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.057817] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.057825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.057840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.057848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.057858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.057865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.057876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.057902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.057913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.057921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.057933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.057941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.057952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.057961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.057971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.057979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.057990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.057997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058083] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 
23:33:54.058327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:28:30.369 [2024-11-17 23:33:54.058562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:30.369 [2024-11-17 23:33:54.058653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:30.370 [2024-11-17 23:33:54.058661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:30.370 [2024-11-17 23:33:54.058671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:30.370 [2024-11-17 23:33:54.058679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:30.370 [2024-11-17 23:33:54.058690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:30.370 [2024-11-17 23:33:54.058701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:30.370 [2024-11-17 23:33:54.058711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:30.370 [2024-11-17 23:33:54.058720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:30.370 [2024-11-17 23:33:54.058730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:30.370 [2024-11-17 23:33:54.058738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:30.370 [2024-11-17 23:33:54.058749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:30.370 [2024-11-17 23:33:54.058758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:30.370 [2024-11-17 23:33:54.058768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:30.370 [2024-11-17 23:33:54.058776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:30.370 [2024-11-17 23:33:54.058798] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:30.370 [2024-11-17 23:33:54.058807] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 86b6b1dc-b32b-40df-8af2-a502fed8f36d 00:28:30.370 
[2024-11-17 23:33:54.058819] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:30.370 [2024-11-17 23:33:54.058827] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:30.370 [2024-11-17 23:33:54.058837] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:30.370 [2024-11-17 23:33:54.058847] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:30.370 [2024-11-17 23:33:54.058856] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:30.370 [2024-11-17 23:33:54.058868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:30.370 [2024-11-17 23:33:54.058890] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:30.370 [2024-11-17 23:33:54.058899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:30.370 [2024-11-17 23:33:54.058908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:30.370 [2024-11-17 23:33:54.058916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.370 [2024-11-17 23:33:54.058928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:30.370 [2024-11-17 23:33:54.058938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.147 ms 00:28:30.370 [2024-11-17 23:33:54.058948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.370 [2024-11-17 23:33:54.062022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.370 [2024-11-17 23:33:54.062063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:30.370 [2024-11-17 23:33:54.062073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.053 ms 00:28:30.370 [2024-11-17 23:33:54.062089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.370 [2024-11-17 23:33:54.062245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.370 [2024-11-17 23:33:54.062259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:30.370 [2024-11-17 23:33:54.062278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:28:30.370 [2024-11-17 23:33:54.062288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.370 [2024-11-17 23:33:54.073103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:30.370 [2024-11-17 23:33:54.073157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:30.370 [2024-11-17 23:33:54.073173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:30.370 [2024-11-17 23:33:54.073184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.370 [2024-11-17 23:33:54.073262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:30.370 [2024-11-17 23:33:54.073274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:30.370 [2024-11-17 23:33:54.073282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:30.370 [2024-11-17 23:33:54.073293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.370 [2024-11-17 23:33:54.073379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:30.370 [2024-11-17 23:33:54.073400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:30.370 [2024-11-17 23:33:54.073409] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:30.370 [2024-11-17 23:33:54.073424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.370 [2024-11-17 23:33:54.073445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:30.370 [2024-11-17 23:33:54.073462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:30.370 [2024-11-17 23:33:54.073471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:30.370 [2024-11-17 23:33:54.073482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.370 [2024-11-17 23:33:54.093476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:30.370 [2024-11-17 23:33:54.093542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:30.370 [2024-11-17 23:33:54.093558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:30.370 [2024-11-17 23:33:54.093573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.370 [2024-11-17 23:33:54.109607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:30.370 [2024-11-17 23:33:54.109685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:30.370 [2024-11-17 23:33:54.109699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:30.370 [2024-11-17 23:33:54.109711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.370 [2024-11-17 23:33:54.109814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:30.370 [2024-11-17 23:33:54.109833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:30.370 [2024-11-17 23:33:54.109841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:30.370 [2024-11-17 23:33:54.109853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.370 [2024-11-17 23:33:54.109960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:30.370 [2024-11-17 23:33:54.109976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:30.370 [2024-11-17 23:33:54.109992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:30.370 [2024-11-17 23:33:54.110004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.370 [2024-11-17 23:33:54.110103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:30.370 [2024-11-17 23:33:54.110120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:30.370 [2024-11-17 23:33:54.110130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:30.370 [2024-11-17 23:33:54.110141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.370 [2024-11-17 23:33:54.110185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:30.370 [2024-11-17 23:33:54.110207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:30.370 [2024-11-17 23:33:54.110216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:30.370 [2024-11-17 23:33:54.110233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.370 [2024-11-17 23:33:54.110291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:30.370 [2024-11-17 23:33:54.110312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:28:30.370 [2024-11-17 23:33:54.110326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:30.370 [2024-11-17 23:33:54.110338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.370 [2024-11-17 23:33:54.110404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:30.370 [2024-11-17 23:33:54.110419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:30.370 [2024-11-17 23:33:54.110429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:30.370 [2024-11-17 23:33:54.110441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.370 [2024-11-17 23:33:54.110628] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 86.207 ms, result 0 00:28:30.370 true 00:28:30.370 23:33:54 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 92510 00:28:30.370 23:33:54 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 92510 ']' 00:28:30.370 23:33:54 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 92510 00:28:30.370 23:33:54 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:28:30.370 23:33:54 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:30.370 23:33:54 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92510 00:28:30.370 killing process with pid 92510 00:28:30.370 23:33:54 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:30.370 23:33:54 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:30.370 23:33:54 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92510' 00:28:30.370 23:33:54 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 92510 00:28:30.370 23:33:54 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 92510 00:28:35.697 23:33:59 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:38.993 262144+0 records in 00:28:38.993 262144+0 records out 00:28:38.993 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.26456 s, 329 MB/s 00:28:38.993 23:34:02 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:40.901 23:34:04 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:40.901 [2024-11-17 23:34:04.450505] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
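
The dd figures above are internally consistent: bs=4K count=256K is 262144 records of 4096 bytes, i.e. 1073741824 bytes (1.0 GiB), and 1073741824 B / 3.26456 s comes to about 329 MB/s in decimal megabytes — exactly the rate dd reports. A quick check (illustrative only, not part of the test scripts):

    # Sanity-check the dd output above: bs=4K count=256K copied in 3.26456 s.
    records = 256 * 1024           # count=256K
    block_size = 4 * 1024          # bs=4K
    total = records * block_size   # 1073741824 bytes = 1.0 GiB
    print(total, round(total / 3.26456 / 1e6))  # -> 1073741824 329
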
00:28:40.901 [2024-11-17 23:34:04.450597] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92713 ] 00:28:40.901 [2024-11-17 23:34:04.590484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.901 [2024-11-17 23:34:04.613836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.901 [2024-11-17 23:34:04.712260] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:40.901 [2024-11-17 23:34:04.712464] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:41.161 [2024-11-17 23:34:04.866163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.161 [2024-11-17 23:34:04.866197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:41.161 [2024-11-17 23:34:04.866208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:41.161 [2024-11-17 23:34:04.866214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.161 [2024-11-17 23:34:04.866249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.161 [2024-11-17 23:34:04.866257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:41.161 [2024-11-17 23:34:04.866263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:28:41.161 [2024-11-17 23:34:04.866269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.161 [2024-11-17 23:34:04.866286] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:41.161 [2024-11-17 23:34:04.866464] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:41.161 [2024-11-17 23:34:04.866478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.161 [2024-11-17 23:34:04.866484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:41.161 [2024-11-17 23:34:04.866490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:28:41.161 [2024-11-17 23:34:04.866498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.161 [2024-11-17 23:34:04.867735] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:41.161 [2024-11-17 23:34:04.870472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.161 [2024-11-17 23:34:04.870591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:41.161 [2024-11-17 23:34:04.870604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.738 ms 00:28:41.161 [2024-11-17 23:34:04.870617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.161 [2024-11-17 23:34:04.870661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.161 [2024-11-17 23:34:04.870671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:41.161 [2024-11-17 23:34:04.870677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:41.161 [2024-11-17 23:34:04.870683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.161 [2024-11-17 23:34:04.876897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
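
In the layout dump that follows, the L2P numbers can be cross-checked: 20971520 entries at the reported 4-byte address size is exactly the 80.00 MiB "Region l2p", and at 4 KiB per FTL block (an assumption here, matching the 4K I/O size used by the test) those entries map 80 GiB of logical space against the 102400 MiB base data region, i.e. roughly 20% held back as spare. A small check with values copied from the log:

    # Values taken from the ftl_layout dump below; the block size is assumed.
    l2p_entries = 20971520                    # "L2P entries"
    addr_size   = 4                           # "L2P address size" (bytes)
    print(l2p_entries * addr_size / 2**20)    # 80.0 MiB -> "Region l2p ... 80.00 MiB"

    ftl_block = 4096                          # assumed 4 KiB FTL block
    print(l2p_entries * ftl_block / 2**30)    # 80.0 GiB of mapped logical space
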
00:28:41.161 [2024-11-17 23:34:04.876997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:41.161 [2024-11-17 23:34:04.877009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.175 ms 00:28:41.161 [2024-11-17 23:34:04.877022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.161 [2024-11-17 23:34:04.877087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.161 [2024-11-17 23:34:04.877095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:41.161 [2024-11-17 23:34:04.877101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:41.161 [2024-11-17 23:34:04.877107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.161 [2024-11-17 23:34:04.877146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.161 [2024-11-17 23:34:04.877154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:41.161 [2024-11-17 23:34:04.877160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:41.161 [2024-11-17 23:34:04.877166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.161 [2024-11-17 23:34:04.877189] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:41.161 [2024-11-17 23:34:04.878727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.161 [2024-11-17 23:34:04.878751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:41.161 [2024-11-17 23:34:04.878759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.542 ms 00:28:41.162 [2024-11-17 23:34:04.878765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.162 [2024-11-17 23:34:04.878787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.162 [2024-11-17 23:34:04.878794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:41.162 [2024-11-17 23:34:04.878800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:41.162 [2024-11-17 23:34:04.878805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.162 [2024-11-17 23:34:04.878826] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:41.162 [2024-11-17 23:34:04.878844] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:41.162 [2024-11-17 23:34:04.878875] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:41.162 [2024-11-17 23:34:04.878901] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:41.162 [2024-11-17 23:34:04.878983] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:41.162 [2024-11-17 23:34:04.878992] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:41.162 [2024-11-17 23:34:04.879000] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:41.162 [2024-11-17 23:34:04.879011] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:41.162 [2024-11-17 23:34:04.879020] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:41.162 [2024-11-17 23:34:04.879026] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:41.162 [2024-11-17 23:34:04.879032] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:41.162 [2024-11-17 23:34:04.879038] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:41.162 [2024-11-17 23:34:04.879043] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:41.162 [2024-11-17 23:34:04.879050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.162 [2024-11-17 23:34:04.879055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:41.162 [2024-11-17 23:34:04.879061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:28:41.162 [2024-11-17 23:34:04.879069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.162 [2024-11-17 23:34:04.879133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.162 [2024-11-17 23:34:04.879142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:41.162 [2024-11-17 23:34:04.879148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:41.162 [2024-11-17 23:34:04.879153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.162 [2024-11-17 23:34:04.879231] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:41.162 [2024-11-17 23:34:04.879239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:41.162 [2024-11-17 23:34:04.879245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:41.162 [2024-11-17 23:34:04.879251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:41.162 [2024-11-17 23:34:04.879260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:41.162 [2024-11-17 23:34:04.879265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:41.162 [2024-11-17 23:34:04.879271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:41.162 [2024-11-17 23:34:04.879278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:41.162 [2024-11-17 23:34:04.879284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:41.162 [2024-11-17 23:34:04.879289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:41.162 [2024-11-17 23:34:04.879295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:41.162 [2024-11-17 23:34:04.879302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:41.162 [2024-11-17 23:34:04.879307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:41.162 [2024-11-17 23:34:04.879312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:41.162 [2024-11-17 23:34:04.879320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:41.162 [2024-11-17 23:34:04.879325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:41.162 [2024-11-17 23:34:04.879330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:41.162 [2024-11-17 23:34:04.879336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:41.162 [2024-11-17 23:34:04.879341] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:41.162 [2024-11-17 23:34:04.879347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:41.162 [2024-11-17 23:34:04.879352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:41.162 [2024-11-17 23:34:04.879357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:41.162 [2024-11-17 23:34:04.879362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:41.162 [2024-11-17 23:34:04.879367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:41.162 [2024-11-17 23:34:04.879371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:41.162 [2024-11-17 23:34:04.879376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:41.162 [2024-11-17 23:34:04.879383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:41.162 [2024-11-17 23:34:04.879392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:41.162 [2024-11-17 23:34:04.879397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:41.162 [2024-11-17 23:34:04.879403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:41.162 [2024-11-17 23:34:04.879409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:41.162 [2024-11-17 23:34:04.879414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:41.162 [2024-11-17 23:34:04.879420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:41.162 [2024-11-17 23:34:04.879426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:41.162 [2024-11-17 23:34:04.879432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:41.162 [2024-11-17 23:34:04.879438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:41.162 [2024-11-17 23:34:04.879443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:41.162 [2024-11-17 23:34:04.879449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:41.162 [2024-11-17 23:34:04.879454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:41.162 [2024-11-17 23:34:04.879461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:41.162 [2024-11-17 23:34:04.879467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:41.162 [2024-11-17 23:34:04.879472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:41.162 [2024-11-17 23:34:04.879478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:41.162 [2024-11-17 23:34:04.879487] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:41.162 [2024-11-17 23:34:04.879494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:41.162 [2024-11-17 23:34:04.879502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:41.162 [2024-11-17 23:34:04.879510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:41.162 [2024-11-17 23:34:04.879516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:41.162 [2024-11-17 23:34:04.879522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:41.162 [2024-11-17 23:34:04.879527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:41.162 
[2024-11-17 23:34:04.879533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:41.162 [2024-11-17 23:34:04.879539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:41.162 [2024-11-17 23:34:04.879545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:41.162 [2024-11-17 23:34:04.879552] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:41.162 [2024-11-17 23:34:04.879560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:41.162 [2024-11-17 23:34:04.879567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:41.162 [2024-11-17 23:34:04.879574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:41.162 [2024-11-17 23:34:04.879581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:41.162 [2024-11-17 23:34:04.879587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:41.162 [2024-11-17 23:34:04.879594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:41.162 [2024-11-17 23:34:04.879600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:41.162 [2024-11-17 23:34:04.879606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:41.162 [2024-11-17 23:34:04.879613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:41.162 [2024-11-17 23:34:04.879619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:41.162 [2024-11-17 23:34:04.879625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:41.162 [2024-11-17 23:34:04.879631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:41.162 [2024-11-17 23:34:04.879641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:41.162 [2024-11-17 23:34:04.879647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:41.162 [2024-11-17 23:34:04.879654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:41.162 [2024-11-17 23:34:04.879660] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:41.163 [2024-11-17 23:34:04.879667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:41.163 [2024-11-17 23:34:04.879674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:41.163 [2024-11-17 23:34:04.879680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:41.163 [2024-11-17 23:34:04.879687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:41.163 [2024-11-17 23:34:04.879693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:41.163 [2024-11-17 23:34:04.879701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.879708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:41.163 [2024-11-17 23:34:04.879715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:28:41.163 [2024-11-17 23:34:04.879725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 [2024-11-17 23:34:04.890691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.890719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:41.163 [2024-11-17 23:34:04.890731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.931 ms 00:28:41.163 [2024-11-17 23:34:04.890737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 [2024-11-17 23:34:04.890803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.890812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:41.163 [2024-11-17 23:34:04.890818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:28:41.163 [2024-11-17 23:34:04.890823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 [2024-11-17 23:34:04.910771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.910929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:41.163 [2024-11-17 23:34:04.910947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.908 ms 00:28:41.163 [2024-11-17 23:34:04.910962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 [2024-11-17 23:34:04.911005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.911014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:41.163 [2024-11-17 23:34:04.911023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:41.163 [2024-11-17 23:34:04.911031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 [2024-11-17 23:34:04.911461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.911478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:41.163 [2024-11-17 23:34:04.911489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:28:41.163 [2024-11-17 23:34:04.911497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 [2024-11-17 23:34:04.911639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.911650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:41.163 [2024-11-17 23:34:04.911659] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:28:41.163 [2024-11-17 23:34:04.911671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 [2024-11-17 23:34:04.918528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.918664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:41.163 [2024-11-17 23:34:04.918683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.835 ms 00:28:41.163 [2024-11-17 23:34:04.918699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 [2024-11-17 23:34:04.922032] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:41.163 [2024-11-17 23:34:04.922077] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:41.163 [2024-11-17 23:34:04.922095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.922105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:41.163 [2024-11-17 23:34:04.922116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.298 ms 00:28:41.163 [2024-11-17 23:34:04.922124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 [2024-11-17 23:34:04.933951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.933982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:41.163 [2024-11-17 23:34:04.933994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.781 ms 00:28:41.163 [2024-11-17 23:34:04.934000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 [2024-11-17 23:34:04.935778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.935804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:41.163 [2024-11-17 23:34:04.935812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.747 ms 00:28:41.163 [2024-11-17 23:34:04.935818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 [2024-11-17 23:34:04.937571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.937666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:41.163 [2024-11-17 23:34:04.937677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms 00:28:41.163 [2024-11-17 23:34:04.937684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 [2024-11-17 23:34:04.937952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.937962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:41.163 [2024-11-17 23:34:04.937970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:28:41.163 [2024-11-17 23:34:04.937976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 [2024-11-17 23:34:04.955598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.955708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:41.163 [2024-11-17 23:34:04.955730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
17.609 ms 00:28:41.163 [2024-11-17 23:34:04.955737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 [2024-11-17 23:34:04.961547] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:41.163 [2024-11-17 23:34:04.964022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.964045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:41.163 [2024-11-17 23:34:04.964054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.257 ms 00:28:41.163 [2024-11-17 23:34:04.964070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 [2024-11-17 23:34:04.964109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.964118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:41.163 [2024-11-17 23:34:04.964125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:41.163 [2024-11-17 23:34:04.964131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 [2024-11-17 23:34:04.964219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.964233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:41.163 [2024-11-17 23:34:04.964240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:28:41.163 [2024-11-17 23:34:04.964245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 [2024-11-17 23:34:04.964267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.964274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:41.163 [2024-11-17 23:34:04.964280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:41.163 [2024-11-17 23:34:04.964287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 [2024-11-17 23:34:04.964318] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:41.163 [2024-11-17 23:34:04.964329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.964337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:41.163 [2024-11-17 23:34:04.964343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:41.163 [2024-11-17 23:34:04.964349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 [2024-11-17 23:34:04.968253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.968279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:41.163 [2024-11-17 23:34:04.968292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.889 ms 00:28:41.163 [2024-11-17 23:34:04.968301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 [2024-11-17 23:34:04.968356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:41.163 [2024-11-17 23:34:04.968364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:41.163 [2024-11-17 23:34:04.968373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:28:41.163 [2024-11-17 23:34:04.968380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.163 
[2024-11-17 23:34:04.969526] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 102.718 ms, result 0 00:28:42.545  [2024-11-17T23:34:07.307Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-17T23:34:08.251Z] Copying: 29/1024 [MB] (12 MBps) [2024-11-17T23:34:09.186Z] Copying: 49/1024 [MB] (19 MBps) [2024-11-17T23:34:10.124Z] Copying: 73/1024 [MB] (24 MBps) [2024-11-17T23:34:11.112Z] Copying: 92/1024 [MB] (19 MBps) [2024-11-17T23:34:12.061Z] Copying: 110/1024 [MB] (18 MBps) [2024-11-17T23:34:13.004Z] Copying: 135/1024 [MB] (24 MBps) [2024-11-17T23:34:14.379Z] Copying: 150/1024 [MB] (15 MBps) [2024-11-17T23:34:15.335Z] Copying: 163/1024 [MB] (12 MBps) [2024-11-17T23:34:16.271Z] Copying: 181/1024 [MB] (17 MBps) [2024-11-17T23:34:17.208Z] Copying: 192/1024 [MB] (11 MBps) [2024-11-17T23:34:18.144Z] Copying: 204/1024 [MB] (12 MBps) [2024-11-17T23:34:19.078Z] Copying: 223/1024 [MB] (18 MBps) [2024-11-17T23:34:20.018Z] Copying: 236/1024 [MB] (12 MBps) [2024-11-17T23:34:21.400Z] Copying: 248/1024 [MB] (12 MBps) [2024-11-17T23:34:22.335Z] Copying: 262/1024 [MB] (13 MBps) [2024-11-17T23:34:23.270Z] Copying: 273/1024 [MB] (11 MBps) [2024-11-17T23:34:24.205Z] Copying: 285/1024 [MB] (11 MBps) [2024-11-17T23:34:25.140Z] Copying: 297/1024 [MB] (11 MBps) [2024-11-17T23:34:26.074Z] Copying: 308/1024 [MB] (11 MBps) [2024-11-17T23:34:27.009Z] Copying: 320/1024 [MB] (11 MBps) [2024-11-17T23:34:28.385Z] Copying: 332/1024 [MB] (11 MBps) [2024-11-17T23:34:29.328Z] Copying: 344/1024 [MB] (11 MBps) [2024-11-17T23:34:30.265Z] Copying: 359/1024 [MB] (15 MBps) [2024-11-17T23:34:31.221Z] Copying: 375/1024 [MB] (15 MBps) [2024-11-17T23:34:32.159Z] Copying: 387/1024 [MB] (11 MBps) [2024-11-17T23:34:33.093Z] Copying: 397/1024 [MB] (10 MBps) [2024-11-17T23:34:34.130Z] Copying: 410/1024 [MB] (13 MBps) [2024-11-17T23:34:35.067Z] Copying: 427/1024 [MB] (16 MBps) [2024-11-17T23:34:36.009Z] Copying: 445/1024 [MB] (18 MBps) [2024-11-17T23:34:37.393Z] Copying: 456/1024 [MB] (11 MBps) [2024-11-17T23:34:38.334Z] Copying: 480/1024 [MB] (23 MBps) [2024-11-17T23:34:39.274Z] Copying: 508/1024 [MB] (27 MBps) [2024-11-17T23:34:40.214Z] Copying: 524/1024 [MB] (15 MBps) [2024-11-17T23:34:41.175Z] Copying: 551/1024 [MB] (27 MBps) [2024-11-17T23:34:42.117Z] Copying: 572/1024 [MB] (21 MBps) [2024-11-17T23:34:43.171Z] Copying: 596/1024 [MB] (23 MBps) [2024-11-17T23:34:44.112Z] Copying: 620/1024 [MB] (24 MBps) [2024-11-17T23:34:45.052Z] Copying: 641/1024 [MB] (20 MBps) [2024-11-17T23:34:45.995Z] Copying: 661/1024 [MB] (20 MBps) [2024-11-17T23:34:47.380Z] Copying: 683/1024 [MB] (21 MBps) [2024-11-17T23:34:48.328Z] Copying: 705/1024 [MB] (21 MBps) [2024-11-17T23:34:49.275Z] Copying: 722/1024 [MB] (17 MBps) [2024-11-17T23:34:50.220Z] Copying: 749/1024 [MB] (26 MBps) [2024-11-17T23:34:51.164Z] Copying: 776/1024 [MB] (27 MBps) [2024-11-17T23:34:52.108Z] Copying: 802/1024 [MB] (25 MBps) [2024-11-17T23:34:53.053Z] Copying: 839/1024 [MB] (37 MBps) [2024-11-17T23:34:53.999Z] Copying: 853/1024 [MB] (14 MBps) [2024-11-17T23:34:55.386Z] Copying: 867/1024 [MB] (13 MBps) [2024-11-17T23:34:56.330Z] Copying: 883/1024 [MB] (16 MBps) [2024-11-17T23:34:57.274Z] Copying: 901/1024 [MB] (18 MBps) [2024-11-17T23:34:58.218Z] Copying: 916/1024 [MB] (14 MBps) [2024-11-17T23:34:59.169Z] Copying: 927/1024 [MB] (11 MBps) [2024-11-17T23:35:00.111Z] Copying: 944/1024 [MB] (17 MBps) [2024-11-17T23:35:01.057Z] Copying: 966/1024 [MB] (21 MBps) [2024-11-17T23:35:02.004Z] Copying: 986/1024 [MB] (20 MBps) 
[2024-11-17T23:35:03.394Z] Copying: 999/1024 [MB] (12 MBps) [2024-11-17T23:35:04.349Z] Copying: 1010/1024 [MB] (11 MBps) [2024-11-17T23:35:04.349Z] Copying: 1020/1024 [MB] (10 MBps) [2024-11-17T23:35:04.349Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-17 23:35:04.298351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.528 [2024-11-17 23:35:04.298416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:40.528 [2024-11-17 23:35:04.298432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:40.528 [2024-11-17 23:35:04.298442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.529 [2024-11-17 23:35:04.298467] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:40.529 [2024-11-17 23:35:04.299289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.529 [2024-11-17 23:35:04.299318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:40.529 [2024-11-17 23:35:04.299330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:29:40.529 [2024-11-17 23:35:04.299339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.529 [2024-11-17 23:35:04.302184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.529 [2024-11-17 23:35:04.302364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:40.529 [2024-11-17 23:35:04.302388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.819 ms 00:29:40.529 [2024-11-17 23:35:04.302396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.529 [2024-11-17 23:35:04.302437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.529 [2024-11-17 23:35:04.302446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:40.529 [2024-11-17 23:35:04.302455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:40.529 [2024-11-17 23:35:04.302463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.529 [2024-11-17 23:35:04.302531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.529 [2024-11-17 23:35:04.302541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:40.529 [2024-11-17 23:35:04.302554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:40.529 [2024-11-17 23:35:04.302562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.529 [2024-11-17 23:35:04.302575] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:40.529 [2024-11-17 23:35:04.302589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302629] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 
23:35:04.302819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.302999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.303006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.303016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:29:40.529 [2024-11-17 23:35:04.303023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.303031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.303039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.303046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.303054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.303061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.303068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.303075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.303083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.303089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.303097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.303104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.303112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.303119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.303126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.303133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.303140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:40.529 [2024-11-17 23:35:04.303147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:40.530 [2024-11-17 23:35:04.303373] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:40.530 [2024-11-17 23:35:04.303384] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 86b6b1dc-b32b-40df-8af2-a502fed8f36d 00:29:40.530 [2024-11-17 23:35:04.303392] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:40.530 [2024-11-17 23:35:04.303402] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:40.530 [2024-11-17 23:35:04.303409] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:40.530 [2024-11-17 23:35:04.303416] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:40.530 [2024-11-17 23:35:04.303424] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:40.530 [2024-11-17 23:35:04.303432] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:40.530 [2024-11-17 23:35:04.303440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:40.530 [2024-11-17 23:35:04.303446] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:40.530 [2024-11-17 23:35:04.303453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:40.530 [2024-11-17 23:35:04.303460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.530 [2024-11-17 23:35:04.303468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:40.530 [2024-11-17 23:35:04.303476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.885 ms 00:29:40.530 [2024-11-17 23:35:04.303483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.530 [2024-11-17 23:35:04.305848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.530 [2024-11-17 23:35:04.306032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:40.530 [2024-11-17 23:35:04.306051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.347 ms 00:29:40.530 [2024-11-17 23:35:04.306059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.530 [2024-11-17 23:35:04.306192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.530 [2024-11-17 23:35:04.306201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:40.530 [2024-11-17 23:35:04.306214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:29:40.530 [2024-11-17 23:35:04.306221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.530 [2024-11-17 23:35:04.313707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.530 [2024-11-17 23:35:04.313913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:40.530 [2024-11-17 23:35:04.313940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.530 [2024-11-17 23:35:04.313948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.530 [2024-11-17 23:35:04.314009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.530 [2024-11-17 23:35:04.314018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:40.530 [2024-11-17 23:35:04.314030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.530 [2024-11-17 23:35:04.314038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.530 [2024-11-17 23:35:04.314087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.530 [2024-11-17 23:35:04.314097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:40.530 [2024-11-17 23:35:04.314105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.530 [2024-11-17 23:35:04.314112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.530 [2024-11-17 23:35:04.314127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.530 [2024-11-17 23:35:04.314136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:29:40.530 [2024-11-17 23:35:04.314144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.530 [2024-11-17 23:35:04.314153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.530 [2024-11-17 23:35:04.327608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.530 [2024-11-17 23:35:04.327663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:40.530 [2024-11-17 23:35:04.327674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.530 [2024-11-17 23:35:04.327682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.530 [2024-11-17 23:35:04.337691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.530 [2024-11-17 23:35:04.337935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:40.530 [2024-11-17 23:35:04.337953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.530 [2024-11-17 23:35:04.337968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.530 [2024-11-17 23:35:04.338023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.530 [2024-11-17 23:35:04.338033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:40.530 [2024-11-17 23:35:04.338041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.530 [2024-11-17 23:35:04.338053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.530 [2024-11-17 23:35:04.338089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.530 [2024-11-17 23:35:04.338097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:40.530 [2024-11-17 23:35:04.338106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.530 [2024-11-17 23:35:04.338113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.530 [2024-11-17 23:35:04.338177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.530 [2024-11-17 23:35:04.338187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:40.530 [2024-11-17 23:35:04.338196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.530 [2024-11-17 23:35:04.338208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.530 [2024-11-17 23:35:04.338237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.530 [2024-11-17 23:35:04.338247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:40.530 [2024-11-17 23:35:04.338259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.530 [2024-11-17 23:35:04.338267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.530 [2024-11-17 23:35:04.338309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.530 [2024-11-17 23:35:04.338319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:40.530 [2024-11-17 23:35:04.338327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.530 [2024-11-17 23:35:04.338335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.530 [2024-11-17 23:35:04.338385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:40.530 [2024-11-17 23:35:04.338398] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:40.530 [2024-11-17 23:35:04.338407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:40.530 [2024-11-17 23:35:04.338416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.530 [2024-11-17 23:35:04.338549] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 40.162 ms, result 0 00:29:40.792 00:29:40.792 00:29:40.792 23:35:04 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:41.053 [2024-11-17 23:35:04.615846] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:29:41.053 [2024-11-17 23:35:04.616021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93324 ] 00:29:41.053 [2024-11-17 23:35:04.766420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.053 [2024-11-17 23:35:04.794475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.317 [2024-11-17 23:35:04.902683] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:41.317 [2024-11-17 23:35:04.902759] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:41.317 [2024-11-17 23:35:05.063320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.317 [2024-11-17 23:35:05.063381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:41.317 [2024-11-17 23:35:05.063396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:41.317 [2024-11-17 23:35:05.063409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.317 [2024-11-17 23:35:05.063472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.317 [2024-11-17 23:35:05.063483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:41.317 [2024-11-17 23:35:05.063492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:29:41.317 [2024-11-17 23:35:05.063504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.317 [2024-11-17 23:35:05.063528] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:41.317 [2024-11-17 23:35:05.063804] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:41.317 [2024-11-17 23:35:05.063823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.317 [2024-11-17 23:35:05.063831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:41.317 [2024-11-17 23:35:05.063840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:29:41.317 [2024-11-17 23:35:05.063851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.317 [2024-11-17 23:35:05.064185] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:41.317 [2024-11-17 23:35:05.064214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.317 
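For orientation, the restore pass launched above reads the test image back out through the FTL bdev with spdk_dd: --ib names the input bdev (ftl0), --of the output file, --json the bdev configuration, and --count the number of I/O units; since 262144 units carry the 1024 MiB transferred, each unit works out to 4 KiB. Restated as a standalone sketch (paths are specific to this CI workspace; spdk_dd also accepts --if/--ob for the opposite direction):

  # Read the FTL bdev back into a file, as invoked by ftl/restore.sh above.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
      --count=262144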
[2024-11-17 23:35:05.064223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:41.317 [2024-11-17 23:35:05.064232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:41.317 [2024-11-17 23:35:05.064240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.317 [2024-11-17 23:35:05.064298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.317 [2024-11-17 23:35:05.064310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:41.317 [2024-11-17 23:35:05.064318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:41.317 [2024-11-17 23:35:05.064332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.317 [2024-11-17 23:35:05.064967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.317 [2024-11-17 23:35:05.065003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:41.317 [2024-11-17 23:35:05.065016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:29:41.317 [2024-11-17 23:35:05.065025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.317 [2024-11-17 23:35:05.065118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.317 [2024-11-17 23:35:05.065129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:41.317 [2024-11-17 23:35:05.065138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:29:41.317 [2024-11-17 23:35:05.065145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.317 [2024-11-17 23:35:05.065171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.317 [2024-11-17 23:35:05.065181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:41.317 [2024-11-17 23:35:05.065189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:41.317 [2024-11-17 23:35:05.065197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.317 [2024-11-17 23:35:05.065219] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:41.317 [2024-11-17 23:35:05.067350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.317 [2024-11-17 23:35:05.067524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:41.317 [2024-11-17 23:35:05.067543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.132 ms 00:29:41.317 [2024-11-17 23:35:05.067560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.317 [2024-11-17 23:35:05.067598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.317 [2024-11-17 23:35:05.067608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:41.317 [2024-11-17 23:35:05.067617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:41.317 [2024-11-17 23:35:05.067625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.317 [2024-11-17 23:35:05.067687] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:41.317 [2024-11-17 23:35:05.067711] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:41.317 [2024-11-17 23:35:05.067757] upgrade/ftl_sb_v5.c: 
287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:41.317 [2024-11-17 23:35:05.067775] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:41.317 [2024-11-17 23:35:05.067904] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:41.317 [2024-11-17 23:35:05.067916] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:41.317 [2024-11-17 23:35:05.067926] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:41.317 [2024-11-17 23:35:05.067940] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:41.317 [2024-11-17 23:35:05.067949] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:41.317 [2024-11-17 23:35:05.067961] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:41.317 [2024-11-17 23:35:05.067969] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:41.317 [2024-11-17 23:35:05.067976] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:41.317 [2024-11-17 23:35:05.067984] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:41.317 [2024-11-17 23:35:05.067991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.317 [2024-11-17 23:35:05.067998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:41.317 [2024-11-17 23:35:05.068010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:29:41.317 [2024-11-17 23:35:05.068017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.317 [2024-11-17 23:35:05.068107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.317 [2024-11-17 23:35:05.068116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:41.317 [2024-11-17 23:35:05.068124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:29:41.317 [2024-11-17 23:35:05.068137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.317 [2024-11-17 23:35:05.068241] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:41.317 [2024-11-17 23:35:05.068252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:41.317 [2024-11-17 23:35:05.068263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:41.317 [2024-11-17 23:35:05.068271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:41.317 [2024-11-17 23:35:05.068280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:41.317 [2024-11-17 23:35:05.068287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:41.317 [2024-11-17 23:35:05.068294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:41.317 [2024-11-17 23:35:05.068300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:41.317 [2024-11-17 23:35:05.068308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:41.317 [2024-11-17 23:35:05.068315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:41.317 [2024-11-17 23:35:05.068325] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:41.317 [2024-11-17 23:35:05.068332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:41.317 [2024-11-17 23:35:05.068339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:41.317 [2024-11-17 23:35:05.068346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:41.317 [2024-11-17 23:35:05.068354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:41.317 [2024-11-17 23:35:05.068361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:41.317 [2024-11-17 23:35:05.068368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:41.317 [2024-11-17 23:35:05.068375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:41.317 [2024-11-17 23:35:05.068384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:41.317 [2024-11-17 23:35:05.068391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:41.317 [2024-11-17 23:35:05.068397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:41.317 [2024-11-17 23:35:05.068404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:41.317 [2024-11-17 23:35:05.068410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:41.317 [2024-11-17 23:35:05.068416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:41.317 [2024-11-17 23:35:05.068423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:41.317 [2024-11-17 23:35:05.068429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:41.317 [2024-11-17 23:35:05.068435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:41.318 [2024-11-17 23:35:05.068442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:41.318 [2024-11-17 23:35:05.068448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:41.318 [2024-11-17 23:35:05.068455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:41.318 [2024-11-17 23:35:05.068462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:41.318 [2024-11-17 23:35:05.068469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:41.318 [2024-11-17 23:35:05.068475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:41.318 [2024-11-17 23:35:05.068482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:41.318 [2024-11-17 23:35:05.068495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:41.318 [2024-11-17 23:35:05.068502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:41.318 [2024-11-17 23:35:05.068508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:41.318 [2024-11-17 23:35:05.068515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:41.318 [2024-11-17 23:35:05.068521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:41.318 [2024-11-17 23:35:05.068527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:41.318 [2024-11-17 23:35:05.068549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:41.318 [2024-11-17 23:35:05.068556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:41.318 
[2024-11-17 23:35:05.068565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:41.318 [2024-11-17 23:35:05.068573] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:41.318 [2024-11-17 23:35:05.068581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:41.318 [2024-11-17 23:35:05.068589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:41.318 [2024-11-17 23:35:05.068596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:41.318 [2024-11-17 23:35:05.068607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:41.318 [2024-11-17 23:35:05.068614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:41.318 [2024-11-17 23:35:05.068621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:41.318 [2024-11-17 23:35:05.068630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:41.318 [2024-11-17 23:35:05.068637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:41.318 [2024-11-17 23:35:05.068644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:41.318 [2024-11-17 23:35:05.068653] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:41.318 [2024-11-17 23:35:05.068663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:41.318 [2024-11-17 23:35:05.068675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:41.318 [2024-11-17 23:35:05.068683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:41.318 [2024-11-17 23:35:05.068690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:41.318 [2024-11-17 23:35:05.068697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:41.318 [2024-11-17 23:35:05.068704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:41.318 [2024-11-17 23:35:05.068711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:41.318 [2024-11-17 23:35:05.068719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:41.318 [2024-11-17 23:35:05.068725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:41.318 [2024-11-17 23:35:05.068732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:41.318 [2024-11-17 23:35:05.068739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:41.318 [2024-11-17 23:35:05.068747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:41.318 [2024-11-17 23:35:05.068757] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:41.318 [2024-11-17 23:35:05.068764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:41.318 [2024-11-17 23:35:05.068776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:41.318 [2024-11-17 23:35:05.068783] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:41.318 [2024-11-17 23:35:05.068791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:41.318 [2024-11-17 23:35:05.068800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:41.318 [2024-11-17 23:35:05.068808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:41.318 [2024-11-17 23:35:05.068815] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:41.318 [2024-11-17 23:35:05.068824] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:41.318 [2024-11-17 23:35:05.068831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.318 [2024-11-17 23:35:05.068842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:41.318 [2024-11-17 23:35:05.068850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:29:41.318 [2024-11-17 23:35:05.068857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.318 [2024-11-17 23:35:05.078839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.318 [2024-11-17 23:35:05.079052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:41.318 [2024-11-17 23:35:05.079076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.927 ms 00:29:41.318 [2024-11-17 23:35:05.079090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.318 [2024-11-17 23:35:05.079180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.318 [2024-11-17 23:35:05.079188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:41.318 [2024-11-17 23:35:05.079197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:29:41.318 [2024-11-17 23:35:05.079204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.318 [2024-11-17 23:35:05.101209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.318 [2024-11-17 23:35:05.101285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:41.318 [2024-11-17 23:35:05.101306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.941 ms 00:29:41.318 [2024-11-17 23:35:05.101321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.318 [2024-11-17 23:35:05.101388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.318 [2024-11-17 23:35:05.101405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:41.318 
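The layout dump above is internally consistent, which is useful when comparing runs: 20971520 L2P entries at the reported 4-byte address size come to exactly the 80.00 MiB shown for Region l2p, and each of the four p2l regions at 8.00 MiB holds the 2048 checkpoint pages noted earlier if a page matches the 4 KiB unit size derived above (an assumption, but consistent with the dump). In bash arithmetic:

  # Cross-check two sizes from the layout dump.
  echo $(( 20971520 * 4 / 1024 / 1024 ))   # l2p region: 80 (MiB)
  echo $(( 2048 * 4096 / 1024 / 1024 ))    # each p2l region: 8 (MiB)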
[2024-11-17 23:35:05.101420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:41.318 [2024-11-17 23:35:05.101442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.318 [2024-11-17 23:35:05.101604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.318 [2024-11-17 23:35:05.101623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:41.318 [2024-11-17 23:35:05.101643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:29:41.318 [2024-11-17 23:35:05.101656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.318 [2024-11-17 23:35:05.101862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.318 [2024-11-17 23:35:05.101923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:41.318 [2024-11-17 23:35:05.101948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:29:41.318 [2024-11-17 23:35:05.101964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.318 [2024-11-17 23:35:05.110797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.318 [2024-11-17 23:35:05.110844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:41.318 [2024-11-17 23:35:05.110854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.799 ms 00:29:41.318 [2024-11-17 23:35:05.110868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.318 [2024-11-17 23:35:05.111008] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:41.318 [2024-11-17 23:35:05.111022] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:41.318 [2024-11-17 23:35:05.111033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.318 [2024-11-17 23:35:05.111041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:41.318 [2024-11-17 23:35:05.111054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:41.318 [2024-11-17 23:35:05.111062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.318 [2024-11-17 23:35:05.123351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.319 [2024-11-17 23:35:05.123391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:41.319 [2024-11-17 23:35:05.123407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.270 ms 00:29:41.319 [2024-11-17 23:35:05.123415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.319 [2024-11-17 23:35:05.123554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.319 [2024-11-17 23:35:05.123564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:41.319 [2024-11-17 23:35:05.123573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:29:41.319 [2024-11-17 23:35:05.123581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.319 [2024-11-17 23:35:05.123635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.319 [2024-11-17 23:35:05.123645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:41.319 [2024-11-17 23:35:05.123656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.002 ms 00:29:41.319 [2024-11-17 23:35:05.123665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.319 [2024-11-17 23:35:05.124018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.319 [2024-11-17 23:35:05.124038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:41.319 [2024-11-17 23:35:05.124050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:29:41.319 [2024-11-17 23:35:05.124062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.319 [2024-11-17 23:35:05.124085] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:41.319 [2024-11-17 23:35:05.124094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.319 [2024-11-17 23:35:05.124103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:41.319 [2024-11-17 23:35:05.124114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:41.319 [2024-11-17 23:35:05.124121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.319 [2024-11-17 23:35:05.133594] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:41.319 [2024-11-17 23:35:05.133920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.319 [2024-11-17 23:35:05.133938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:41.319 [2024-11-17 23:35:05.133949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.777 ms 00:29:41.319 [2024-11-17 23:35:05.133961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.581 [2024-11-17 23:35:05.136374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.581 [2024-11-17 23:35:05.136410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:41.581 [2024-11-17 23:35:05.136420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.386 ms 00:29:41.581 [2024-11-17 23:35:05.136430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.581 [2024-11-17 23:35:05.136529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.581 [2024-11-17 23:35:05.136553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:41.581 [2024-11-17 23:35:05.136563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:41.581 [2024-11-17 23:35:05.136571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.581 [2024-11-17 23:35:05.136603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.581 [2024-11-17 23:35:05.136612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:41.581 [2024-11-17 23:35:05.136626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:41.581 [2024-11-17 23:35:05.136633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.581 [2024-11-17 23:35:05.136675] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:41.581 [2024-11-17 23:35:05.136684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.581 [2024-11-17 23:35:05.136694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:41.581 [2024-11-17 23:35:05.136702] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:41.581 [2024-11-17 23:35:05.136712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.581 [2024-11-17 23:35:05.143131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.581 [2024-11-17 23:35:05.143304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:41.581 [2024-11-17 23:35:05.143376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.400 ms 00:29:41.581 [2024-11-17 23:35:05.143400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.581 [2024-11-17 23:35:05.143530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.581 [2024-11-17 23:35:05.143565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:41.581 [2024-11-17 23:35:05.143586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:29:41.581 [2024-11-17 23:35:05.143608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.581 [2024-11-17 23:35:05.145137] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 81.282 ms, result 0 00:29:42.524  [2024-11-17T23:35:07.732Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-17T23:35:08.678Z] Copying: 20/1024 [MB] (10 MBps) [2024-11-17T23:35:09.620Z] Copying: 33/1024 [MB] (12 MBps) [2024-11-17T23:35:10.566Z] Copying: 45/1024 [MB] (11 MBps) [2024-11-17T23:35:11.513Z] Copying: 55/1024 [MB] (10 MBps) [2024-11-17T23:35:12.457Z] Copying: 67/1024 [MB] (11 MBps) [2024-11-17T23:35:13.397Z] Copying: 78/1024 [MB] (10 MBps) [2024-11-17T23:35:14.361Z] Copying: 89/1024 [MB] (10 MBps) [2024-11-17T23:35:15.746Z] Copying: 100/1024 [MB] (11 MBps) [2024-11-17T23:35:16.689Z] Copying: 110/1024 [MB] (10 MBps) [2024-11-17T23:35:17.635Z] Copying: 121/1024 [MB] (10 MBps) [2024-11-17T23:35:18.579Z] Copying: 140/1024 [MB] (18 MBps) [2024-11-17T23:35:19.523Z] Copying: 155/1024 [MB] (15 MBps) [2024-11-17T23:35:20.468Z] Copying: 165/1024 [MB] (10 MBps) [2024-11-17T23:35:21.419Z] Copying: 182/1024 [MB] (16 MBps) [2024-11-17T23:35:22.361Z] Copying: 199/1024 [MB] (17 MBps) [2024-11-17T23:35:23.815Z] Copying: 218/1024 [MB] (18 MBps) [2024-11-17T23:35:24.388Z] Copying: 237/1024 [MB] (19 MBps) [2024-11-17T23:35:25.772Z] Copying: 262/1024 [MB] (24 MBps) [2024-11-17T23:35:26.343Z] Copying: 301/1024 [MB] (39 MBps) [2024-11-17T23:35:27.730Z] Copying: 320/1024 [MB] (19 MBps) [2024-11-17T23:35:28.674Z] Copying: 334/1024 [MB] (13 MBps) [2024-11-17T23:35:29.615Z] Copying: 346/1024 [MB] (12 MBps) [2024-11-17T23:35:30.557Z] Copying: 369/1024 [MB] (22 MBps) [2024-11-17T23:35:31.499Z] Copying: 386/1024 [MB] (16 MBps) [2024-11-17T23:35:32.441Z] Copying: 409/1024 [MB] (22 MBps) [2024-11-17T23:35:33.384Z] Copying: 427/1024 [MB] (18 MBps) [2024-11-17T23:35:34.339Z] Copying: 443/1024 [MB] (16 MBps) [2024-11-17T23:35:35.726Z] Copying: 462/1024 [MB] (18 MBps) [2024-11-17T23:35:36.671Z] Copying: 483/1024 [MB] (21 MBps) [2024-11-17T23:35:37.612Z] Copying: 503/1024 [MB] (19 MBps) [2024-11-17T23:35:38.549Z] Copying: 522/1024 [MB] (19 MBps) [2024-11-17T23:35:39.492Z] Copying: 538/1024 [MB] (15 MBps) [2024-11-17T23:35:40.434Z] Copying: 560/1024 [MB] (22 MBps) [2024-11-17T23:35:41.382Z] Copying: 578/1024 [MB] (17 MBps) [2024-11-17T23:35:42.768Z] Copying: 594/1024 [MB] (16 MBps) [2024-11-17T23:35:43.346Z] Copying: 607/1024 [MB] (12 MBps) [2024-11-17T23:35:44.726Z] Copying: 618/1024 [MB] (11 MBps) 
[2024-11-17T23:35:45.412Z] Copying: 632/1024 [MB] (13 MBps) [2024-11-17T23:35:46.354Z] Copying: 643/1024 [MB] (11 MBps) [2024-11-17T23:35:47.734Z] Copying: 654/1024 [MB] (10 MBps) [2024-11-17T23:35:48.675Z] Copying: 665/1024 [MB] (11 MBps) [2024-11-17T23:35:49.615Z] Copying: 676/1024 [MB] (10 MBps) [2024-11-17T23:35:50.557Z] Copying: 687/1024 [MB] (10 MBps) [2024-11-17T23:35:51.506Z] Copying: 697/1024 [MB] (10 MBps) [2024-11-17T23:35:52.448Z] Copying: 711/1024 [MB] (13 MBps) [2024-11-17T23:35:53.388Z] Copying: 721/1024 [MB] (10 MBps) [2024-11-17T23:35:54.770Z] Copying: 732/1024 [MB] (11 MBps) [2024-11-17T23:35:55.342Z] Copying: 748/1024 [MB] (15 MBps) [2024-11-17T23:35:56.730Z] Copying: 765/1024 [MB] (17 MBps) [2024-11-17T23:35:57.671Z] Copying: 777/1024 [MB] (11 MBps) [2024-11-17T23:35:58.617Z] Copying: 805/1024 [MB] (28 MBps) [2024-11-17T23:35:59.560Z] Copying: 820/1024 [MB] (14 MBps) [2024-11-17T23:36:00.504Z] Copying: 833/1024 [MB] (13 MBps) [2024-11-17T23:36:01.446Z] Copying: 849/1024 [MB] (16 MBps) [2024-11-17T23:36:02.389Z] Copying: 873/1024 [MB] (24 MBps) [2024-11-17T23:36:03.333Z] Copying: 894/1024 [MB] (21 MBps) [2024-11-17T23:36:04.720Z] Copying: 913/1024 [MB] (19 MBps) [2024-11-17T23:36:05.660Z] Copying: 934/1024 [MB] (20 MBps) [2024-11-17T23:36:06.602Z] Copying: 961/1024 [MB] (26 MBps) [2024-11-17T23:36:07.546Z] Copying: 977/1024 [MB] (15 MBps) [2024-11-17T23:36:08.490Z] Copying: 998/1024 [MB] (21 MBps) [2024-11-17T23:36:09.063Z] Copying: 1014/1024 [MB] (15 MBps) [2024-11-17T23:36:09.063Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-17 23:36:08.907988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.242 [2024-11-17 23:36:08.908114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:45.243 [2024-11-17 23:36:08.908151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:45.243 [2024-11-17 23:36:08.908175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.243 [2024-11-17 23:36:08.908231] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:45.243 [2024-11-17 23:36:08.909453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.243 [2024-11-17 23:36:08.909501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:45.243 [2024-11-17 23:36:08.909516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.185 ms 00:30:45.243 [2024-11-17 23:36:08.909528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.243 [2024-11-17 23:36:08.909821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.243 [2024-11-17 23:36:08.909835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:45.243 [2024-11-17 23:36:08.909847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:30:45.243 [2024-11-17 23:36:08.909858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.243 [2024-11-17 23:36:08.909924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.243 [2024-11-17 23:36:08.909943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:45.243 [2024-11-17 23:36:08.909955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:45.243 [2024-11-17 23:36:08.909966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.243 [2024-11-17 
23:36:08.910042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.243 [2024-11-17 23:36:08.910057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:45.243 [2024-11-17 23:36:08.910068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:30:45.243 [2024-11-17 23:36:08.910079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.243 [2024-11-17 23:36:08.910098] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:45.243 [2024-11-17 23:36:08.910115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 
[2024-11-17 23:36:08.910338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 
state: free 00:30:45.243 [2024-11-17 23:36:08.910587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:45.243 [2024-11-17 23:36:08.910761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 
0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.910996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.911008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.911018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.911028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.911048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.911059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.911068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.911078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.911088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.911097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:45.244 [2024-11-17 23:36:08.911107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:30:45.244 [2024-11-17 23:36:08.911116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:30:45.244 [2024-11-17 23:36:08.911127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:30:45.244 [2024-11-17 23:36:08.911137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:30:45.244 [2024-11-17 23:36:08.911147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:30:45.244 [2024-11-17 23:36:08.911166] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:30:45.244 [2024-11-17 23:36:08.911180] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 86b6b1dc-b32b-40df-8af2-a502fed8f36d
00:30:45.244 [2024-11-17 23:36:08.911190] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:30:45.244 [2024-11-17 23:36:08.911207] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32
00:30:45.244 [2024-11-17 23:36:08.911218] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:30:45.244 [2024-11-17 23:36:08.911229] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:30:45.244 [2024-11-17 23:36:08.911239] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:30:45.244 [2024-11-17 23:36:08.911253] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:30:45.244 [2024-11-17 23:36:08.911263] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:30:45.244 [2024-11-17 23:36:08.911271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:30:45.244 [2024-11-17 23:36:08.911279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:30:45.244 [2024-11-17 23:36:08.911291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:45.244 [2024-11-17 23:36:08.911302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:30:45.244 [2024-11-17 23:36:08.911313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms
00:30:45.244 [2024-11-17 23:36:08.911322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:45.244 [2024-11-17 23:36:08.915070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:45.244 [2024-11-17 23:36:08.915263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:30:45.244 [2024-11-17 23:36:08.915471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.725 ms
00:30:45.244 [2024-11-17 23:36:08.915521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:45.244 [2024-11-17 23:36:08.915711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:45.244 [2024-11-17 23:36:08.915743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:30:45.244 [2024-11-17 23:36:08.915830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms
00:30:45.244 [2024-11-17 23:36:08.915868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:45.244 [2024-11-17 23:36:08.927384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:45.244 [2024-11-17 23:36:08.927566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:30:45.244 [2024-11-17 23:36:08.927656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.000 ms 00:30:45.244 [2024-11-17 23:36:08.927681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.244 [2024-11-17 23:36:08.927804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.244 [2024-11-17 23:36:08.927844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:45.244 [2024-11-17 23:36:08.927957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.244 [2024-11-17 23:36:08.928390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.244 [2024-11-17 23:36:08.928779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.244 [2024-11-17 23:36:08.929495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:45.244 [2024-11-17 23:36:08.929633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.244 [2024-11-17 23:36:08.929670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.244 [2024-11-17 23:36:08.929770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.244 [2024-11-17 23:36:08.929816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:45.244 [2024-11-17 23:36:08.929828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.244 [2024-11-17 23:36:08.929838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.244 [2024-11-17 23:36:08.950305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.244 [2024-11-17 23:36:08.950357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:45.244 [2024-11-17 23:36:08.950370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.244 [2024-11-17 23:36:08.950380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.244 [2024-11-17 23:36:08.961236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.244 [2024-11-17 23:36:08.961269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:45.244 [2024-11-17 23:36:08.961280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.244 [2024-11-17 23:36:08.961288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.244 [2024-11-17 23:36:08.961381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.244 [2024-11-17 23:36:08.961391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:45.244 [2024-11-17 23:36:08.961401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.244 [2024-11-17 23:36:08.961409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.244 [2024-11-17 23:36:08.961444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.245 [2024-11-17 23:36:08.961459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:45.245 [2024-11-17 23:36:08.961468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.245 [2024-11-17 23:36:08.961475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.245 [2024-11-17 23:36:08.961528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.245 [2024-11-17 23:36:08.961538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 
00:30:45.245 [2024-11-17 23:36:08.961549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:45.245 [2024-11-17 23:36:08.961557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:45.245 [2024-11-17 23:36:08.961585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:45.245 [2024-11-17 23:36:08.961595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:30:45.245 [2024-11-17 23:36:08.961603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:45.245 [2024-11-17 23:36:08.961610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:45.245 [2024-11-17 23:36:08.961653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:45.245 [2024-11-17 23:36:08.961664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:30:45.245 [2024-11-17 23:36:08.961672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:45.245 [2024-11-17 23:36:08.961680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:45.245 [2024-11-17 23:36:08.961728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:45.245 [2024-11-17 23:36:08.961740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:30:45.245 [2024-11-17 23:36:08.961749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:45.245 [2024-11-17 23:36:08.961756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:45.245 [2024-11-17 23:36:08.961941] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 53.898 ms, result 0
00:30:45.513
00:30:45.513
00:30:45.513 23:36:09 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:30:48.060 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:30:48.060 23:36:11 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
00:30:48.060 [2024-11-17 23:36:11.360999] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
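
For context, the two harness commands above are the verify-then-rewrite step of this ftl_restore_fast test: md5sum -c checks the test payload against a checksum recorded earlier in the run, and spdk_dd then pushes the payload back through the restored ftl0 bdev at a block offset. A minimal sketch of that pattern, assuming a built SPDK tree; the SPDK_DIR and TESTFILE variables are illustrative, and only the spdk_dd flags that appear in the log (--if, --ob, --json, --seek) are used:

  # Sketch only -- mirrors the md5 verify + offset rewrite seen above.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk          # illustrative path
  TESTFILE="$SPDK_DIR/test/ftl/testfile"

  # Record the payload checksum once (done earlier in the run).
  md5sum "$TESTFILE" > "$TESTFILE.md5"

  # After the 'FTL fast shutdown' / 'FTL startup' cycle, verify the
  # payload is byte-identical (the restore.sh@76 step).
  md5sum -c "$TESTFILE.md5"

  # Re-drive the payload through the FTL bdev at an output offset of
  # 131072 blocks (the restore.sh@79 step): --if is the input file,
  # --ob the output bdev, --json the SPDK app config that defines ftl0.
  "$SPDK_DIR/build/bin/spdk_dd" \
    --if="$TESTFILE" \
    --ob=ftl0 \
    --json="$SPDK_DIR/test/ftl/config/ftl.json" \
    --seek=131072

The --seek value behaves like dd's seek= (skip that many output blocks before writing), so the second copy lands past the first; the 'Copying: N/1024 [MB]' progress that follows and the later md5 check then validate data across the shutdown/startup cycle traced around this point.
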
00:30:48.060 [2024-11-17 23:36:11.361089] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93993 ] 00:30:48.060 [2024-11-17 23:36:11.506413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.060 [2024-11-17 23:36:11.536413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.060 [2024-11-17 23:36:11.685939] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:48.060 [2024-11-17 23:36:11.686030] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:48.060 [2024-11-17 23:36:11.850019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.060 [2024-11-17 23:36:11.850100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:48.060 [2024-11-17 23:36:11.850118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:48.060 [2024-11-17 23:36:11.850128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.060 [2024-11-17 23:36:11.850194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.060 [2024-11-17 23:36:11.850207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:48.060 [2024-11-17 23:36:11.850216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:30:48.060 [2024-11-17 23:36:11.850226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.060 [2024-11-17 23:36:11.850251] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:48.060 [2024-11-17 23:36:11.850597] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:48.060 [2024-11-17 23:36:11.850632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.060 [2024-11-17 23:36:11.850644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:48.060 [2024-11-17 23:36:11.850654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:30:48.060 [2024-11-17 23:36:11.850670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.060 [2024-11-17 23:36:11.851047] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:48.060 [2024-11-17 23:36:11.851090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.060 [2024-11-17 23:36:11.851107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:48.060 [2024-11-17 23:36:11.851119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:30:48.060 [2024-11-17 23:36:11.851127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.060 [2024-11-17 23:36:11.851194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.060 [2024-11-17 23:36:11.851207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:48.060 [2024-11-17 23:36:11.851221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:30:48.060 [2024-11-17 23:36:11.851233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.060 [2024-11-17 23:36:11.851496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:48.060 [2024-11-17 23:36:11.851510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:48.060 [2024-11-17 23:36:11.851520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:30:48.060 [2024-11-17 23:36:11.851528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.060 [2024-11-17 23:36:11.851624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.060 [2024-11-17 23:36:11.851649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:48.060 [2024-11-17 23:36:11.851662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:30:48.060 [2024-11-17 23:36:11.851670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.060 [2024-11-17 23:36:11.851696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.060 [2024-11-17 23:36:11.851712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:48.060 [2024-11-17 23:36:11.851722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:48.060 [2024-11-17 23:36:11.851730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.060 [2024-11-17 23:36:11.851754] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:48.060 [2024-11-17 23:36:11.854635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.060 [2024-11-17 23:36:11.854687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:48.060 [2024-11-17 23:36:11.854699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.882 ms 00:30:48.060 [2024-11-17 23:36:11.854713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.060 [2024-11-17 23:36:11.854754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.060 [2024-11-17 23:36:11.854764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:48.060 [2024-11-17 23:36:11.854775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:30:48.060 [2024-11-17 23:36:11.854784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.060 [2024-11-17 23:36:11.854840] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:48.060 [2024-11-17 23:36:11.854867] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:48.060 [2024-11-17 23:36:11.854935] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:48.060 [2024-11-17 23:36:11.854959] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:48.060 [2024-11-17 23:36:11.855074] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:48.060 [2024-11-17 23:36:11.855088] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:48.061 [2024-11-17 23:36:11.855100] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:48.061 [2024-11-17 23:36:11.855112] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:48.061 [2024-11-17 23:36:11.855126] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:48.061 [2024-11-17 23:36:11.855139] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:48.061 [2024-11-17 23:36:11.855146] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:48.061 [2024-11-17 23:36:11.855155] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:48.061 [2024-11-17 23:36:11.855163] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:48.061 [2024-11-17 23:36:11.855173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.061 [2024-11-17 23:36:11.855183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:48.061 [2024-11-17 23:36:11.855192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:30:48.061 [2024-11-17 23:36:11.855203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.061 [2024-11-17 23:36:11.855297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.061 [2024-11-17 23:36:11.855308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:48.061 [2024-11-17 23:36:11.855321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:48.061 [2024-11-17 23:36:11.855332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.061 [2024-11-17 23:36:11.855437] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:48.061 [2024-11-17 23:36:11.855460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:48.061 [2024-11-17 23:36:11.855474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:48.061 [2024-11-17 23:36:11.855484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:48.061 [2024-11-17 23:36:11.855494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:48.061 [2024-11-17 23:36:11.855501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:48.061 [2024-11-17 23:36:11.855510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:48.061 [2024-11-17 23:36:11.855519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:48.061 [2024-11-17 23:36:11.855527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:48.061 [2024-11-17 23:36:11.855534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:48.061 [2024-11-17 23:36:11.855541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:48.061 [2024-11-17 23:36:11.855550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:48.061 [2024-11-17 23:36:11.855558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:48.061 [2024-11-17 23:36:11.855565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:48.061 [2024-11-17 23:36:11.855573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:48.061 [2024-11-17 23:36:11.855581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:48.061 [2024-11-17 23:36:11.855587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:48.061 [2024-11-17 23:36:11.855595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:48.061 [2024-11-17 23:36:11.855605] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:48.061 [2024-11-17 23:36:11.855612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:48.061 [2024-11-17 23:36:11.855619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:48.061 [2024-11-17 23:36:11.855627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:48.061 [2024-11-17 23:36:11.855634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:48.061 [2024-11-17 23:36:11.855641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:48.061 [2024-11-17 23:36:11.855648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:48.061 [2024-11-17 23:36:11.855655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:48.061 [2024-11-17 23:36:11.855662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:48.061 [2024-11-17 23:36:11.855671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:48.061 [2024-11-17 23:36:11.855677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:48.061 [2024-11-17 23:36:11.855684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:48.061 [2024-11-17 23:36:11.855692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:48.061 [2024-11-17 23:36:11.855699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:48.061 [2024-11-17 23:36:11.855705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:48.061 [2024-11-17 23:36:11.855713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:48.061 [2024-11-17 23:36:11.855726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:48.061 [2024-11-17 23:36:11.855732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:48.061 [2024-11-17 23:36:11.855739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:48.061 [2024-11-17 23:36:11.855745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:48.061 [2024-11-17 23:36:11.855752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:48.061 [2024-11-17 23:36:11.855758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:48.061 [2024-11-17 23:36:11.855765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:48.061 [2024-11-17 23:36:11.855774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:48.061 [2024-11-17 23:36:11.855780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:48.061 [2024-11-17 23:36:11.855791] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:48.061 [2024-11-17 23:36:11.855800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:48.061 [2024-11-17 23:36:11.855812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:48.061 [2024-11-17 23:36:11.855819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:48.061 [2024-11-17 23:36:11.855830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:48.061 [2024-11-17 23:36:11.855837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:48.061 [2024-11-17 23:36:11.855844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:48.061 
[2024-11-17 23:36:11.855853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:48.061 [2024-11-17 23:36:11.855860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:48.061 [2024-11-17 23:36:11.855869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:48.061 [2024-11-17 23:36:11.855900] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:48.061 [2024-11-17 23:36:11.855911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:48.061 [2024-11-17 23:36:11.855920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:48.061 [2024-11-17 23:36:11.855927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:48.061 [2024-11-17 23:36:11.855935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:48.061 [2024-11-17 23:36:11.855942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:48.061 [2024-11-17 23:36:11.855950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:48.061 [2024-11-17 23:36:11.855958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:48.061 [2024-11-17 23:36:11.855965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:48.061 [2024-11-17 23:36:11.855973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:48.061 [2024-11-17 23:36:11.855981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:48.061 [2024-11-17 23:36:11.855989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:48.061 [2024-11-17 23:36:11.855997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:48.061 [2024-11-17 23:36:11.856006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:48.061 [2024-11-17 23:36:11.856014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:48.061 [2024-11-17 23:36:11.856027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:48.061 [2024-11-17 23:36:11.856039] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:48.061 [2024-11-17 23:36:11.856048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:48.061 [2024-11-17 23:36:11.856057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:48.061 [2024-11-17 23:36:11.856064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:48.061 [2024-11-17 23:36:11.856073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:48.061 [2024-11-17 23:36:11.856081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:48.061 [2024-11-17 23:36:11.856093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.061 [2024-11-17 23:36:11.856102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:48.061 [2024-11-17 23:36:11.856111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.727 ms 00:30:48.061 [2024-11-17 23:36:11.856119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.061 [2024-11-17 23:36:11.870619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.061 [2024-11-17 23:36:11.870669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:48.061 [2024-11-17 23:36:11.870684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.455 ms 00:30:48.061 [2024-11-17 23:36:11.870693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.061 [2024-11-17 23:36:11.870784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.062 [2024-11-17 23:36:11.870794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:48.062 [2024-11-17 23:36:11.870808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:30:48.062 [2024-11-17 23:36:11.870818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.322 [2024-11-17 23:36:11.898504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.322 [2024-11-17 23:36:11.898584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:48.322 [2024-11-17 23:36:11.898605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.622 ms 00:30:48.322 [2024-11-17 23:36:11.898627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.322 [2024-11-17 23:36:11.898694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.322 [2024-11-17 23:36:11.898710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:48.322 [2024-11-17 23:36:11.898725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:48.322 [2024-11-17 23:36:11.898742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.322 [2024-11-17 23:36:11.898929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.322 [2024-11-17 23:36:11.899004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:48.322 [2024-11-17 23:36:11.899031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:30:48.322 [2024-11-17 23:36:11.899046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.322 [2024-11-17 23:36:11.899241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.322 [2024-11-17 23:36:11.899276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:48.322 [2024-11-17 23:36:11.899291] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:30:48.322 [2024-11-17 23:36:11.899303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.322 [2024-11-17 23:36:11.911386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.322 [2024-11-17 23:36:11.911434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:48.322 [2024-11-17 23:36:11.911455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.053 ms 00:30:48.322 [2024-11-17 23:36:11.911468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.322 [2024-11-17 23:36:11.911619] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:48.322 [2024-11-17 23:36:11.911635] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:48.322 [2024-11-17 23:36:11.911646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.322 [2024-11-17 23:36:11.911655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:48.322 [2024-11-17 23:36:11.911665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:48.322 [2024-11-17 23:36:11.911673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.322 [2024-11-17 23:36:11.923998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.322 [2024-11-17 23:36:11.924049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:48.322 [2024-11-17 23:36:11.924062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.304 ms 00:30:48.322 [2024-11-17 23:36:11.924070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.322 [2024-11-17 23:36:11.924221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.322 [2024-11-17 23:36:11.924231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:48.322 [2024-11-17 23:36:11.924243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:30:48.322 [2024-11-17 23:36:11.924252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.322 [2024-11-17 23:36:11.924308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.322 [2024-11-17 23:36:11.924320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:48.322 [2024-11-17 23:36:11.924337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:48.322 [2024-11-17 23:36:11.924349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.322 [2024-11-17 23:36:11.924706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.322 [2024-11-17 23:36:11.924732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:48.322 [2024-11-17 23:36:11.924742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:30:48.322 [2024-11-17 23:36:11.924751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.322 [2024-11-17 23:36:11.924770] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:48.322 [2024-11-17 23:36:11.924780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.322 [2024-11-17 23:36:11.924788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:30:48.322 [2024-11-17 23:36:11.924799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:48.322 [2024-11-17 23:36:11.924808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.322 [2024-11-17 23:36:11.935604] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:48.322 [2024-11-17 23:36:11.935776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.322 [2024-11-17 23:36:11.935788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:48.322 [2024-11-17 23:36:11.935800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.945 ms 00:30:48.322 [2024-11-17 23:36:11.935816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.322 [2024-11-17 23:36:11.938406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.322 [2024-11-17 23:36:11.938444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:48.322 [2024-11-17 23:36:11.938459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.564 ms 00:30:48.322 [2024-11-17 23:36:11.938468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.322 [2024-11-17 23:36:11.938573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.322 [2024-11-17 23:36:11.938585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:48.322 [2024-11-17 23:36:11.938594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:30:48.322 [2024-11-17 23:36:11.938603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.322 [2024-11-17 23:36:11.938640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.322 [2024-11-17 23:36:11.938652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:48.322 [2024-11-17 23:36:11.938661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:48.322 [2024-11-17 23:36:11.938673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.322 [2024-11-17 23:36:11.938718] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:48.322 [2024-11-17 23:36:11.938729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.322 [2024-11-17 23:36:11.938739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:48.322 [2024-11-17 23:36:11.938748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:48.323 [2024-11-17 23:36:11.938755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.323 [2024-11-17 23:36:11.946265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.323 [2024-11-17 23:36:11.946326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:48.323 [2024-11-17 23:36:11.946338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.490 ms 00:30:48.323 [2024-11-17 23:36:11.946347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.323 [2024-11-17 23:36:11.946442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.323 [2024-11-17 23:36:11.946452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:48.323 [2024-11-17 23:36:11.946465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.043 ms
00:30:48.323 [2024-11-17 23:36:11.946474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:48.323 [2024-11-17 23:36:11.948055] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 97.491 ms, result 0
00:30:49.266  [2024-11-17T23:36:14.031Z] Copying: 18/1024 [MB] (18 MBps)
[... incremental 'Copying: N/1024 [MB]' progress updates (10-25 MBps) trimmed ...]
[2024-11-17T23:37:28.140Z] Copying: 1018/1024 [MB] (10 MBps)
[2024-11-17T23:37:28.140Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-17 23:37:27.900024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:04.319 [2024-11-17 23:37:27.900073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:32:04.319 [2024-11-17 23:37:27.900086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:32:04.319 [2024-11-17 23:37:27.900094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:04.319 [2024-11-17 23:37:27.902979] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:32:04.319 [2024-11-17 23:37:27.904410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:04.319 [2024-11-17 23:37:27.904447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:32:04.319 [2024-11-17 23:37:27.904457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.397 ms
00:32:04.319 [2024-11-17 23:37:27.904463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:04.319 [2024-11-17 23:37:27.912583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:04.319 [2024-11-17 23:37:27.912618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:32:04.319 [2024-11-17 23:37:27.912626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.450 ms
00:32:04.319 [2024-11-17 23:37:27.912632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:04.319 [2024-11-17 23:37:27.912654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:04.319 [2024-11-17 23:37:27.912662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata
00:32:04.319 [2024-11-17 23:37:27.912669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:32:04.319 [2024-11-17 23:37:27.912674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:04.319 [2024-11-17 23:37:27.912722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:04.319 [2024-11-17 23:37:27.912729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state
00:32:04.319 [2024-11-17
00:32:04.319 [2024-11-17 23:37:27.912753] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:32:04.319 [2024-11-17 23:37:27.912762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129536 / 261120 wr_cnt: 1 state: open
00:32:04.319 [2024-11-17 23:37:27.912770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 ... Band 100: 0 / 261120 wr_cnt: 0 state: free
00:32:04.320 [2024-11-17 23:37:27.913367] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:32:04.320 [2024-11-17 23:37:27.913375] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 86b6b1dc-b32b-40df-8af2-a502fed8f36d
00:32:04.320 [2024-11-17 23:37:27.913384] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129536
00:32:04.320 [2024-11-17 23:37:27.913389] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129568
00:32:04.320 [2024-11-17 23:37:27.913394] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129536
00:32:04.320 [2024-11-17 23:37:27.913400] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002
00:32:04.320 [2024-11-17 23:37:27.913406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
00:32:04.320 [2024-11-17 23:37:27.913434] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics (duration: 0.681 ms, status: 0)
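A quick cross-check of the write-amplification figure in the dump above; both inputs are read directly off the log (WAF here is total media writes over user writes):

    # WAF sanity check against the ftl_dev_dump_stats output
    total_writes = 129568   # user data plus FTL metadata/relocation writes
    user_writes = 129536
    print(f"WAF = {total_writes / user_writes:.4f}")  # WAF = 1.0002, as reported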
00:32:04.320 [2024-11-17 23:37:27.914722] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P (duration: 1.259 ms, status: 0)
00:32:04.320 [2024-11-17 23:37:27.914839] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration: 0.052 ms, status: 0)
00:32:04.320 [2024-11-17 23:37:27.918972] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc (duration: 0.000 ms, status: 0)
00:32:04.321 [2024-11-17 23:37:27.919054] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata (duration: 0.000 ms, status: 0)
00:32:04.321 [2024-11-17 23:37:27.919103] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map (duration: 0.000 ms, status: 0)
00:32:04.321 [2024-11-17 23:37:27.919134] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map (duration: 0.000 ms, status: 0)
00:32:04.321 [2024-11-17 23:37:27.926506] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache (duration: 0.000 ms, status: 0)
00:32:04.321 [2024-11-17 23:37:27.933022] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata (duration: 0.000 ms, status: 0)
00:32:04.321 [2024-11-17 23:37:27.933105] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel (duration: 0.000 ms, status: 0)
00:32:04.321 [2024-11-17 23:37:27.933148] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands (duration: 0.000 ms, status: 0)
00:32:04.321 [2024-11-17 23:37:27.933205] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools (duration: 0.000 ms, status: 0)
00:32:04.321 [2024-11-17 23:37:27.933251] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock (duration: 0.000 ms, status: 0)
00:32:04.321 [2024-11-17 23:37:27.933297] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev (duration: 0.000 ms, status: 0)
00:32:04.321 [2024-11-17 23:37:27.933354] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev (duration: 0.000 ms, status: 0)
00:32:04.321 [2024-11-17 23:37:27.933469] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 34.693 ms, result 0
00:32:04.894
00:32:04.894
00:32:04.894 23:37:28 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
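spdk_dd takes --skip and --count in logical blocks of the input bdev. Assuming ftl0 exposes 4096-byte blocks (an assumption; the block size is not printed in this log), the arguments line up with the 1024 MB transfer the progress output reports later:

    # Hypothetical sizing check for the spdk_dd invocation above
    block = 4096                        # assumed ftl0 logical block size, bytes
    print(262144 * block // 2**20)      # --count -> 1024 MiB, matching "Copying: 1024/1024 [MB]"
    print(131072 * block // 2**20)      # --skip  -> the read starts 512 MiB into ftl0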
00:32:05.155 [2024-11-17 23:37:28.732443] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:32:05.155 [2024-11-17 23:37:28.732569] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94763 ]
00:32:05.155 [2024-11-17 23:37:28.874333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:05.155 [2024-11-17 23:37:28.895197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:05.418 [2024-11-17 23:37:28.978805] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:32:05.418 [2024-11-17 23:37:28.978856] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:32:05.418 [2024-11-17 23:37:29.125104] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration (duration: 0.004 ms, status: 0)
00:32:05.418 [2024-11-17 23:37:29.125193] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev (duration: 0.024 ms, status: 0)
00:32:05.418 [2024-11-17 23:37:29.125233] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:32:05.418 [2024-11-17 23:37:29.125447] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:32:05.418 [2024-11-17 23:37:29.125464] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev (duration: 0.235 ms, status: 0)
00:32:05.418 [2024-11-17 23:37:29.125698] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1
00:32:05.418 [2024-11-17 23:37:29.125723] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block (duration: 0.026 ms, status: 0)
00:32:05.418 [2024-11-17 23:37:29.125775] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block (duration: 0.021 ms, status: 0)
00:32:05.418 [2024-11-17 23:37:29.125984] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools (duration: 0.161 ms, status: 0)
00:32:05.418 [2024-11-17 23:37:29.126073] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands (duration: 0.046 ms, status: 0)
00:32:05.418 [2024-11-17 23:37:29.126105] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device (duration: 0.004 ms, status: 0)
00:32:05.418 [2024-11-17 23:37:29.126143] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:32:05.418 [2024-11-17 23:37:29.127368] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel (duration: 1.228 ms, status: 0)
00:32:05.418 [2024-11-17 23:37:29.127433] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands (duration: 0.009 ms, status: 0)
00:32:05.418 [2024-11-17 23:37:29.127465] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:32:05.418 [2024-11-17 23:37:29.127478] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:32:05.418 [2024-11-17 23:37:29.127510] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:32:05.418 [2024-11-17 23:37:29.127524] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:32:05.418 [2024-11-17 23:37:29.127601] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:32:05.418 [2024-11-17 23:37:29.127608] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:32:05.418 [2024-11-17 23:37:29.127615] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:32:05.418 [2024-11-17 23:37:29.127625] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:32:05.418 [2024-11-17 23:37:29.127632] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:32:05.418 [2024-11-17 23:37:29.127639] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:32:05.418 [2024-11-17 23:37:29.127645] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:32:05.418 [2024-11-17 23:37:29.127650] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:32:05.419 [2024-11-17 23:37:29.127659] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:32:05.419 [2024-11-17 23:37:29.127665] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout (duration: 0.201 ms, status: 0)
00:32:05.419 [2024-11-17 23:37:29.127746] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout (duration: 0.052 ms, status: 0)
00:32:05.419 [2024-11-17 23:37:29.127836] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:32:05.419 [2024-11-17 23:37:29.127844] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
00:32:05.419 [2024-11-17 23:37:29.127868] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 80.00 MiB
00:32:05.419 [2024-11-17 23:37:29.127890] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 80.12 MiB, blocks 0.50 MiB
00:32:05.419 [2024-11-17 23:37:29.127911] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 80.62 MiB, blocks 0.50 MiB
00:32:05.419 [2024-11-17 23:37:29.127926] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 113.88 MiB, blocks 0.12 MiB
00:32:05.419 [2024-11-17 23:37:29.127941] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 114.00 MiB, blocks 0.12 MiB
00:32:05.419 [2024-11-17 23:37:29.127956] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 81.12 MiB, blocks 8.00 MiB
00:32:05.419 [2024-11-17 23:37:29.127972] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 89.12 MiB, blocks 8.00 MiB
00:32:05.419 [2024-11-17 23:37:29.127987] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 97.12 MiB, blocks 8.00 MiB
00:32:05.419 [2024-11-17 23:37:29.128001] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 105.12 MiB, blocks 8.00 MiB
00:32:05.419 [2024-11-17 23:37:29.128016] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 113.12 MiB, blocks 0.25 MiB
00:32:05.419 [2024-11-17 23:37:29.128035] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 113.38 MiB, blocks 0.25 MiB
00:32:05.419 [2024-11-17 23:37:29.128051] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log: offset 113.62 MiB, blocks 0.12 MiB
00:32:05.419 [2024-11-17 23:37:29.128070] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror: offset 113.75 MiB, blocks 0.12 MiB
00:32:05.419 [2024-11-17 23:37:29.128086] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:32:05.419 [2024-11-17 23:37:29.128093] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
00:32:05.419 [2024-11-17 23:37:29.128115] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
00:32:05.419 [2024-11-17 23:37:29.128132] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
00:32:05.419 [2024-11-17 23:37:29.128150] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:32:05.419 [2024-11-17 23:37:29.128159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:32:05.419 [2024-11-17 23:37:29.128169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:32:05.419 [2024-11-17 23:37:29.128175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:32:05.419 [2024-11-17 23:37:29.128181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:32:05.419 [2024-11-17 23:37:29.128187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:32:05.419 [2024-11-17 23:37:29.128193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:32:05.419 [2024-11-17 23:37:29.128199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:32:05.419 [2024-11-17 23:37:29.128205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:32:05.419 [2024-11-17 23:37:29.128212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:32:05.419 [2024-11-17 23:37:29.128218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:32:05.419 [2024-11-17 23:37:29.128225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:32:05.419 [2024-11-17 23:37:29.128232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:32:05.419 [2024-11-17 23:37:29.128238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:32:05.419 [2024-11-17 23:37:29.128244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:32:05.419 [2024-11-17 23:37:29.128254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:32:05.419 [2024-11-17 23:37:29.128260] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:32:05.419 [2024-11-17 23:37:29.128269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:32:05.419 [2024-11-17 23:37:29.128275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:32:05.420 [2024-11-17 23:37:29.128281] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:32:05.420 [2024-11-17 23:37:29.128287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:32:05.420 [2024-11-17 23:37:29.128293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:32:05.420 [2024-11-17 23:37:29.128299] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade (duration: 0.516 ms, status: 0)
00:32:05.420 [2024-11-17 23:37:29.133660] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata (duration: 5.310 ms, status: 0)
00:32:05.420 [2024-11-17 23:37:29.133756] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses (duration: 0.047 ms, status: 0)
00:32:05.420 [2024-11-17 23:37:29.147744] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache (duration: 13.934 ms, status: 0)
00:32:05.420 [2024-11-17 23:37:29.147820] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map (duration: 0.002 ms, status: 0)
00:32:05.420 [2024-11-17 23:37:29.147930] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map (duration: 0.033 ms, status: 0)
00:32:05.420 [2024-11-17 23:37:29.148042] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata (duration: 0.075 ms, status: 0)
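The layout figures above are internally consistent. Assuming the 4 KiB FTL block size these counts imply (an assumption, not stated in the log), the L2P table size and the mapped capacity follow from "L2P entries" and "L2P address size":

    # Cross-check of the L2P figures from the layout dump
    entries = 20971520                  # L2P entries
    addr_bytes = 4                      # L2P address size, bytes per entry
    print(entries * addr_bytes / 2**20) # 80.0 -> "Region l2p ... blocks: 80.00 MiB"
    print(entries * 4096 / 2**30)       # 80.0 GiB of mappable user LBAs on a
                                        # 103424 MiB (~101 GiB) base device; the
                                        # remainder is metadata and band headroom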
00:32:05.420 [2024-11-17 23:37:29.152597] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc (duration: 4.521 ms, status: 0)
00:32:05.420 [2024-11-17 23:37:29.152775] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:32:05.420 [2024-11-17 23:37:29.152793] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:32:05.420 [2024-11-17 23:37:29.152807] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata (duration: 0.057 ms, status: 0)
00:32:05.420 [2024-11-17 23:37:29.166061] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata (duration: 13.209 ms, status: 0)
00:32:05.420 [2024-11-17 23:37:29.166187] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata (duration: 0.067 ms, status: 0)
00:32:05.420 [2024-11-17 23:37:29.166242] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata (duration: 0.001 ms, status: 0)
00:32:05.420 [2024-11-17 23:37:29.166480] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing (duration: 0.189 ms, status: 0)
00:32:05.420 [2024-11-17 23:37:29.166517] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore
00:32:05.420 [2024-11-17 23:37:29.166527] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints (duration: 0.011 ms, status: 0)
00:32:05.420 [2024-11-17 23:37:29.172826] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:32:05.420 [2024-11-17 23:37:29.172927] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P (duration: 6.369 ms, status: 0)
00:32:05.420 [2024-11-17 23:37:29.174725] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P (duration: 1.765 ms, status: 0)
00:32:05.420 [2024-11-17 23:37:29.174799] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000
00:32:05.420 [2024-11-17 23:37:29.175226] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization (duration: 0.440 ms, status: 0)
00:32:05.420 [2024-11-17 23:37:29.175273] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller (duration: 0.007 ms, status: 0)
00:32:05.420 [2024-11-17 23:37:29.175317] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:32:05.420 [2024-11-17 23:37:29.175324] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup (duration: 0.009 ms, status: 0)
00:32:05.420 [2024-11-17 23:37:29.178565] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state (duration: 3.209 ms, status: 0)
00:32:05.420 [2024-11-17 23:37:29.178659] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization (duration: 0.026 ms, status: 0)
00:32:05.420 [2024-11-17 23:37:29.179406] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 54.017 ms, result 0
00:32:06.805 [2024-11-17T23:37:31.567Z] Copying: 21/1024 [MB] (21 MBps) ... [2024-11-17T23:38:35.252Z] Copying: 1024/1024 [MB] (average 15 MBps)
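The reported copy rate is consistent with the wall-clock spread of the progress entries above (first entry 23:37:31, last 23:38:35, roughly 64 s for 1024 MB; all figures read off the log):

    # Back-of-envelope throughput check for the copy above
    mb, seconds = 1024, 64
    print(mb / seconds)   # ~16 MB/s over the progress window; spdk_dd's own
                          # "average 15 MBps" also includes ramp-up and teardown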
00:33:11.431 [2024-11-17 23:38:34.939131] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel (duration: 0.006 ms, status: 0)
00:33:11.431 [2024-11-17 23:38:34.939314] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:33:11.432 [2024-11-17 23:38:34.940120] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device (duration: 0.777 ms, status: 0)
00:33:11.432 [2024-11-17 23:38:34.940554] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller (duration: 0.308 ms, status: 0)
00:33:11.432 [2024-11-17 23:38:34.940626] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Fast persist NV cache metadata (duration: 0.006 ms, status: 0)
00:33:11.432 [2024-11-17 23:38:34.941324] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL SHM clean state (duration: 0.029 ms, status: 0)
00:33:11.432 [2024-11-17 23:38:34.941376] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:33:11.432 [2024-11-17 23:38:34.941392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open
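In byte terms, again assuming the 4096-byte FTL block size used in the earlier checks (an assumption, not printed in the log):

    # Band validity converted to bytes (block size is assumed)
    valid, capacity = 131072, 261120
    print(valid * 4096 // 2**20)     # 512 MiB of valid data in band 1
    print(capacity * 4096 // 2**20)  # 1020 MiB usable capacity per band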
wr_cnt: 1 state: open 00:33:11.432 [2024-11-17 23:38:34.941403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
26: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941960] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.941997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.942006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.942016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.942025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.942034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.942042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.942052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.942061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.942072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.942081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.942090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.942100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.942109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:11.432 [2024-11-17 23:38:34.942118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942198] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 23:38:34.942425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:11.433 [2024-11-17 
23:38:34.942444] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:11.433 [2024-11-17 23:38:34.942461] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 86b6b1dc-b32b-40df-8af2-a502fed8f36d 00:33:11.433 [2024-11-17 23:38:34.942471] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:33:11.433 [2024-11-17 23:38:34.942480] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1568 00:33:11.433 [2024-11-17 23:38:34.942488] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1536 00:33:11.433 [2024-11-17 23:38:34.942505] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0208 00:33:11.433 [2024-11-17 23:38:34.942518] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:11.433 [2024-11-17 23:38:34.942535] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:11.433 [2024-11-17 23:38:34.942551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:11.433 [2024-11-17 23:38:34.942563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:11.433 [2024-11-17 23:38:34.942574] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:11.433 [2024-11-17 23:38:34.942587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.433 [2024-11-17 23:38:34.942603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:11.433 [2024-11-17 23:38:34.942621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.212 ms 00:33:11.433 [2024-11-17 23:38:34.942638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.433 [2024-11-17 23:38:34.945682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.433 [2024-11-17 23:38:34.945711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:11.433 [2024-11-17 23:38:34.945721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.011 ms 00:33:11.433 [2024-11-17 23:38:34.945736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.433 [2024-11-17 23:38:34.945857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.433 [2024-11-17 23:38:34.945865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:11.433 [2024-11-17 23:38:34.945873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:33:11.433 [2024-11-17 23:38:34.945922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.433 [2024-11-17 23:38:34.953842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.433 [2024-11-17 23:38:34.953898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:11.433 [2024-11-17 23:38:34.953925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.433 [2024-11-17 23:38:34.953933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.433 [2024-11-17 23:38:34.954003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.433 [2024-11-17 23:38:34.954012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:11.433 [2024-11-17 23:38:34.954021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.433 [2024-11-17 23:38:34.954030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.433 [2024-11-17 23:38:34.954096] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.433 [2024-11-17 23:38:34.954107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:11.433 [2024-11-17 23:38:34.954119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.433 [2024-11-17 23:38:34.954128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.433 [2024-11-17 23:38:34.954145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.433 [2024-11-17 23:38:34.954154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:11.433 [2024-11-17 23:38:34.954162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.433 [2024-11-17 23:38:34.954170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.433 [2024-11-17 23:38:34.967976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.433 [2024-11-17 23:38:34.968029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:11.433 [2024-11-17 23:38:34.968039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.433 [2024-11-17 23:38:34.968048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.433 [2024-11-17 23:38:34.980331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.433 [2024-11-17 23:38:34.980384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:11.433 [2024-11-17 23:38:34.980395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.433 [2024-11-17 23:38:34.980408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.433 [2024-11-17 23:38:34.980461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.433 [2024-11-17 23:38:34.980475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:11.433 [2024-11-17 23:38:34.980484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.433 [2024-11-17 23:38:34.980496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.433 [2024-11-17 23:38:34.980534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.433 [2024-11-17 23:38:34.980545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:11.433 [2024-11-17 23:38:34.980554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.433 [2024-11-17 23:38:34.980562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.433 [2024-11-17 23:38:34.980615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.433 [2024-11-17 23:38:34.980625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:11.433 [2024-11-17 23:38:34.980638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.433 [2024-11-17 23:38:34.980646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.433 [2024-11-17 23:38:34.980673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.433 [2024-11-17 23:38:34.980682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:11.433 [2024-11-17 23:38:34.980691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.433 [2024-11-17 23:38:34.980699] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:33:11.433 [2024-11-17 23:38:34.980740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.433 [2024-11-17 23:38:34.980750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:11.433 [2024-11-17 23:38:34.980758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.433 [2024-11-17 23:38:34.980766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.433 [2024-11-17 23:38:34.980814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.433 [2024-11-17 23:38:34.980824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:11.433 [2024-11-17 23:38:34.980832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.433 [2024-11-17 23:38:34.980840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.434 [2024-11-17 23:38:34.981051] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 41.889 ms, result 0 00:33:11.434 00:33:11.434 00:33:11.434 23:38:35 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:13.978 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:13.978 23:38:37 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:13.978 23:38:37 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:33:13.978 23:38:37 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:13.978 23:38:37 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:13.978 23:38:37 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:13.978 Process with pid 92510 is not found 00:33:13.978 Remove shared memory files 00:33:13.978 23:38:37 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 92510 00:33:13.978 23:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 92510 ']' 00:33:13.978 23:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 92510 00:33:13.978 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (92510) - No such process 00:33:13.978 23:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 92510 is not found' 00:33:13.978 23:38:37 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:33:13.978 23:38:37 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:13.978 23:38:37 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:33:13.978 23:38:37 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_86b6b1dc-b32b-40df-8af2-a502fed8f36d_band_md /dev/hugepages/ftl_86b6b1dc-b32b-40df-8af2-a502fed8f36d_l2p_l1 /dev/hugepages/ftl_86b6b1dc-b32b-40df-8af2-a502fed8f36d_l2p_l2 /dev/hugepages/ftl_86b6b1dc-b32b-40df-8af2-a502fed8f36d_l2p_l2_ctx /dev/hugepages/ftl_86b6b1dc-b32b-40df-8af2-a502fed8f36d_nvc_md /dev/hugepages/ftl_86b6b1dc-b32b-40df-8af2-a502fed8f36d_p2l_pool /dev/hugepages/ftl_86b6b1dc-b32b-40df-8af2-a502fed8f36d_sb /dev/hugepages/ftl_86b6b1dc-b32b-40df-8af2-a502fed8f36d_sb_shm /dev/hugepages/ftl_86b6b1dc-b32b-40df-8af2-a502fed8f36d_trim_bitmap /dev/hugepages/ftl_86b6b1dc-b32b-40df-8af2-a502fed8f36d_trim_log /dev/hugepages/ftl_86b6b1dc-b32b-40df-8af2-a502fed8f36d_trim_md 
/dev/hugepages/ftl_86b6b1dc-b32b-40df-8af2-a502fed8f36d_vmap 00:33:13.978 23:38:37 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:33:13.978 23:38:37 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:13.978 23:38:37 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:33:13.978 00:33:13.978 real 4m51.983s 00:33:13.978 user 4m40.675s 00:33:13.978 sys 0m11.078s 00:33:13.978 23:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:13.978 ************************************ 00:33:13.978 END TEST ftl_restore_fast 00:33:13.978 23:38:37 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:33:13.978 ************************************ 00:33:13.978 23:38:37 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:13.978 23:38:37 ftl -- ftl/ftl.sh@14 -- # killprocess 83441 00:33:13.978 23:38:37 ftl -- common/autotest_common.sh@954 -- # '[' -z 83441 ']' 00:33:13.978 Process with pid 83441 is not found 00:33:13.978 23:38:37 ftl -- common/autotest_common.sh@958 -- # kill -0 83441 00:33:13.978 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (83441) - No such process 00:33:13.978 23:38:37 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 83441 is not found' 00:33:13.978 23:38:37 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:13.978 23:38:37 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=95495 00:33:13.978 23:38:37 ftl -- ftl/ftl.sh@20 -- # waitforlisten 95495 00:33:13.978 23:38:37 ftl -- common/autotest_common.sh@835 -- # '[' -z 95495 ']' 00:33:13.978 23:38:37 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:13.978 23:38:37 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:13.978 23:38:37 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:13.978 23:38:37 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:13.978 23:38:37 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:13.978 23:38:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:13.978 [2024-11-17 23:38:37.557274] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:33:13.978 [2024-11-17 23:38:37.557399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95495 ] 00:33:13.978 [2024-11-17 23:38:37.701115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.978 [2024-11-17 23:38:37.730489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.936 23:38:38 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:14.936 23:38:38 ftl -- common/autotest_common.sh@868 -- # return 0 00:33:14.936 23:38:38 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:14.936 nvme0n1 00:33:14.936 23:38:38 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:14.936 23:38:38 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:14.936 23:38:38 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:15.197 23:38:38 ftl -- ftl/common.sh@28 -- # stores=b1e8b18b-abde-4e30-aaf7-4a0b2656f704 00:33:15.197 23:38:38 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:15.197 23:38:38 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b1e8b18b-abde-4e30-aaf7-4a0b2656f704 00:33:15.458 23:38:39 ftl -- ftl/ftl.sh@23 -- # killprocess 95495 00:33:15.458 23:38:39 ftl -- common/autotest_common.sh@954 -- # '[' -z 95495 ']' 00:33:15.458 23:38:39 ftl -- common/autotest_common.sh@958 -- # kill -0 95495 00:33:15.458 23:38:39 ftl -- common/autotest_common.sh@959 -- # uname 00:33:15.458 23:38:39 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:15.458 23:38:39 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95495 00:33:15.458 23:38:39 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:15.458 23:38:39 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:15.458 killing process with pid 95495 00:33:15.458 23:38:39 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95495' 00:33:15.458 23:38:39 ftl -- common/autotest_common.sh@973 -- # kill 95495 00:33:15.458 23:38:39 ftl -- common/autotest_common.sh@978 -- # wait 95495 00:33:15.731 23:38:39 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:15.991 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:15.991 Waiting for block devices as requested 00:33:15.991 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:16.251 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:16.251 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:16.251 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:21.537 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:21.537 Remove shared memory files 00:33:21.537 23:38:45 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:33:21.537 23:38:45 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:21.537 23:38:45 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:33:21.537 23:38:45 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:33:21.537 23:38:45 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:33:21.537 23:38:45 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:21.537 23:38:45 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:33:21.537 
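For reference, the clear_lvols teardown traced in the xtrace above reduces to the shell sequence below. This is a sketch reconstructed from the logged commands rather than from the ftl/common.sh source; the rpc.py path and the RPC names (bdev_lvol_get_lvstores, bdev_lvol_delete_lvstore) are exactly those shown in the trace.

    # Ask the running spdk_tgt for every lvolstore, then delete each one
    # over the RPC socket, keyed by UUID, as the xtrace above shows.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # bdev_lvol_get_lvstores prints a JSON array; jq extracts the uuid fields.
    stores=$("$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
        "$rpc" bdev_lvol_delete_lvstore -u "$lvs"   # delete the store by UUID
    done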
************************************ 00:33:21.537 END TEST ftl 00:33:21.537 ************************************ 00:33:21.537 00:33:21.537 real 18m10.632s 00:33:21.537 user 19m59.142s 00:33:21.537 sys 1m17.138s 00:33:21.537 23:38:45 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:21.537 23:38:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:21.537 23:38:45 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:21.537 23:38:45 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:21.537 23:38:45 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:21.537 23:38:45 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:21.537 23:38:45 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:21.537 23:38:45 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:21.537 23:38:45 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:21.537 23:38:45 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:33:21.537 23:38:45 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:33:21.537 23:38:45 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:33:21.537 23:38:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:21.537 23:38:45 -- common/autotest_common.sh@10 -- # set +x 00:33:21.537 23:38:45 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:33:21.537 23:38:45 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:33:21.537 23:38:45 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:33:21.537 23:38:45 -- common/autotest_common.sh@10 -- # set +x 00:33:22.927 INFO: APP EXITING 00:33:22.927 INFO: killing all VMs 00:33:22.927 INFO: killing vhost app 00:33:22.927 INFO: EXIT DONE 00:33:23.189 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:23.760 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:23.760 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:23.760 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:23.760 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:24.021 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:24.596 Cleaning 00:33:24.596 Removing: /var/run/dpdk/spdk0/config 00:33:24.596 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:24.596 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:24.596 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:24.596 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:24.596 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:24.596 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:24.596 Removing: /var/run/dpdk/spdk0 00:33:24.596 Removing: /var/run/dpdk/spdk_pid68950 00:33:24.596 Removing: /var/run/dpdk/spdk_pid69113 00:33:24.596 Removing: /var/run/dpdk/spdk_pid69309 00:33:24.596 Removing: /var/run/dpdk/spdk_pid69391 00:33:24.596 Removing: /var/run/dpdk/spdk_pid69420 00:33:24.596 Removing: /var/run/dpdk/spdk_pid69531 00:33:24.596 Removing: /var/run/dpdk/spdk_pid69549 00:33:24.596 Removing: /var/run/dpdk/spdk_pid69726 00:33:24.596 Removing: /var/run/dpdk/spdk_pid69800 00:33:24.596 Removing: /var/run/dpdk/spdk_pid69879 00:33:24.596 Removing: /var/run/dpdk/spdk_pid69979 00:33:24.596 Removing: /var/run/dpdk/spdk_pid70060 00:33:24.596 Removing: /var/run/dpdk/spdk_pid70094 00:33:24.596 Removing: /var/run/dpdk/spdk_pid70130 00:33:24.596 Removing: /var/run/dpdk/spdk_pid70201 00:33:24.596 Removing: /var/run/dpdk/spdk_pid70274 00:33:24.596 Removing: /var/run/dpdk/spdk_pid70686 00:33:24.596 Removing: /var/run/dpdk/spdk_pid70733 
00:33:24.596 Removing: /var/run/dpdk/spdk_pid70780 00:33:24.596 Removing: /var/run/dpdk/spdk_pid70796 00:33:24.596 Removing: /var/run/dpdk/spdk_pid70854 00:33:24.596 Removing: /var/run/dpdk/spdk_pid70870 00:33:24.596 Removing: /var/run/dpdk/spdk_pid70928 00:33:24.596 Removing: /var/run/dpdk/spdk_pid70944 00:33:24.596 Removing: /var/run/dpdk/spdk_pid70986 00:33:24.596 Removing: /var/run/dpdk/spdk_pid71004 00:33:24.596 Removing: /var/run/dpdk/spdk_pid71046 00:33:24.596 Removing: /var/run/dpdk/spdk_pid71064 00:33:24.596 Removing: /var/run/dpdk/spdk_pid71191 00:33:24.596 Removing: /var/run/dpdk/spdk_pid71222 00:33:24.596 Removing: /var/run/dpdk/spdk_pid71300 00:33:24.596 Removing: /var/run/dpdk/spdk_pid71461 00:33:24.597 Removing: /var/run/dpdk/spdk_pid71534 00:33:24.597 Removing: /var/run/dpdk/spdk_pid71565 00:33:24.597 Removing: /var/run/dpdk/spdk_pid71987 00:33:24.597 Removing: /var/run/dpdk/spdk_pid72080 00:33:24.597 Removing: /var/run/dpdk/spdk_pid72191 00:33:24.597 Removing: /var/run/dpdk/spdk_pid72227 00:33:24.597 Removing: /var/run/dpdk/spdk_pid72253 00:33:24.597 Removing: /var/run/dpdk/spdk_pid72331 00:33:24.597 Removing: /var/run/dpdk/spdk_pid72944 00:33:24.597 Removing: /var/run/dpdk/spdk_pid72975 00:33:24.597 Removing: /var/run/dpdk/spdk_pid73426 00:33:24.597 Removing: /var/run/dpdk/spdk_pid73519 00:33:24.597 Removing: /var/run/dpdk/spdk_pid73617 00:33:24.597 Removing: /var/run/dpdk/spdk_pid73659 00:33:24.597 Removing: /var/run/dpdk/spdk_pid73679 00:33:24.597 Removing: /var/run/dpdk/spdk_pid73705 00:33:24.597 Removing: /var/run/dpdk/spdk_pid75540 00:33:24.597 Removing: /var/run/dpdk/spdk_pid75661 00:33:24.597 Removing: /var/run/dpdk/spdk_pid75665 00:33:24.597 Removing: /var/run/dpdk/spdk_pid75677 00:33:24.597 Removing: /var/run/dpdk/spdk_pid75727 00:33:24.597 Removing: /var/run/dpdk/spdk_pid75731 00:33:24.597 Removing: /var/run/dpdk/spdk_pid75743 00:33:24.597 Removing: /var/run/dpdk/spdk_pid75783 00:33:24.597 Removing: /var/run/dpdk/spdk_pid75787 00:33:24.597 Removing: /var/run/dpdk/spdk_pid75799 00:33:24.597 Removing: /var/run/dpdk/spdk_pid75838 00:33:24.597 Removing: /var/run/dpdk/spdk_pid75842 00:33:24.597 Removing: /var/run/dpdk/spdk_pid75854 00:33:24.597 Removing: /var/run/dpdk/spdk_pid77215 00:33:24.597 Removing: /var/run/dpdk/spdk_pid77301 00:33:24.597 Removing: /var/run/dpdk/spdk_pid78699 00:33:24.597 Removing: /var/run/dpdk/spdk_pid80064 00:33:24.597 Removing: /var/run/dpdk/spdk_pid80125 00:33:24.597 Removing: /var/run/dpdk/spdk_pid80184 00:33:24.597 Removing: /var/run/dpdk/spdk_pid80235 00:33:24.597 Removing: /var/run/dpdk/spdk_pid80317 00:33:24.597 Removing: /var/run/dpdk/spdk_pid80385 00:33:24.597 Removing: /var/run/dpdk/spdk_pid80522 00:33:24.597 Removing: /var/run/dpdk/spdk_pid80871 00:33:24.862 Removing: /var/run/dpdk/spdk_pid80891 00:33:24.862 Removing: /var/run/dpdk/spdk_pid81334 00:33:24.862 Removing: /var/run/dpdk/spdk_pid81507 00:33:24.862 Removing: /var/run/dpdk/spdk_pid81594 00:33:24.862 Removing: /var/run/dpdk/spdk_pid81693 00:33:24.862 Removing: /var/run/dpdk/spdk_pid81735 00:33:24.862 Removing: /var/run/dpdk/spdk_pid81755 00:33:24.862 Removing: /var/run/dpdk/spdk_pid82051 00:33:24.862 Removing: /var/run/dpdk/spdk_pid82088 00:33:24.862 Removing: /var/run/dpdk/spdk_pid82134 00:33:24.862 Removing: /var/run/dpdk/spdk_pid82509 00:33:24.862 Removing: /var/run/dpdk/spdk_pid82647 00:33:24.862 Removing: /var/run/dpdk/spdk_pid83441 00:33:24.862 Removing: /var/run/dpdk/spdk_pid83562 00:33:24.862 Removing: /var/run/dpdk/spdk_pid83721 00:33:24.862 Removing: 
/var/run/dpdk/spdk_pid83796 00:33:24.862 Removing: /var/run/dpdk/spdk_pid84093 00:33:24.862 Removing: /var/run/dpdk/spdk_pid84390 00:33:24.862 Removing: /var/run/dpdk/spdk_pid84731 00:33:24.862 Removing: /var/run/dpdk/spdk_pid84908 00:33:24.862 Removing: /var/run/dpdk/spdk_pid85030 00:33:24.862 Removing: /var/run/dpdk/spdk_pid85066 00:33:24.862 Removing: /var/run/dpdk/spdk_pid85285 00:33:24.862 Removing: /var/run/dpdk/spdk_pid85299 00:33:24.862 Removing: /var/run/dpdk/spdk_pid85335 00:33:24.862 Removing: /var/run/dpdk/spdk_pid85558 00:33:24.862 Removing: /var/run/dpdk/spdk_pid85787 00:33:24.862 Removing: /var/run/dpdk/spdk_pid86281 00:33:24.862 Removing: /var/run/dpdk/spdk_pid86989 00:33:24.862 Removing: /var/run/dpdk/spdk_pid87565 00:33:24.862 Removing: /var/run/dpdk/spdk_pid88333 00:33:24.862 Removing: /var/run/dpdk/spdk_pid88476 00:33:24.862 Removing: /var/run/dpdk/spdk_pid88549 00:33:24.862 Removing: /var/run/dpdk/spdk_pid88926 00:33:24.862 Removing: /var/run/dpdk/spdk_pid88979 00:33:24.862 Removing: /var/run/dpdk/spdk_pid89951 00:33:24.862 Removing: /var/run/dpdk/spdk_pid90592 00:33:24.862 Removing: /var/run/dpdk/spdk_pid91545 00:33:24.862 Removing: /var/run/dpdk/spdk_pid91667 00:33:24.862 Removing: /var/run/dpdk/spdk_pid91703 00:33:24.862 Removing: /var/run/dpdk/spdk_pid91758 00:33:24.862 Removing: /var/run/dpdk/spdk_pid91810 00:33:24.862 Removing: /var/run/dpdk/spdk_pid91863 00:33:24.862 Removing: /var/run/dpdk/spdk_pid92041 00:33:24.862 Removing: /var/run/dpdk/spdk_pid92110 00:33:24.862 Removing: /var/run/dpdk/spdk_pid92178 00:33:24.862 Removing: /var/run/dpdk/spdk_pid92249 00:33:24.862 Removing: /var/run/dpdk/spdk_pid92284 00:33:24.862 Removing: /var/run/dpdk/spdk_pid92359 00:33:24.862 Removing: /var/run/dpdk/spdk_pid92510 00:33:24.862 Removing: /var/run/dpdk/spdk_pid92713 00:33:24.862 Removing: /var/run/dpdk/spdk_pid93324 00:33:24.862 Removing: /var/run/dpdk/spdk_pid93993 00:33:24.862 Removing: /var/run/dpdk/spdk_pid94763 00:33:24.862 Removing: /var/run/dpdk/spdk_pid95495 00:33:24.862 Clean 00:33:24.862 23:38:48 -- common/autotest_common.sh@1453 -- # return 0 00:33:24.862 23:38:48 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:33:24.862 23:38:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:24.862 23:38:48 -- common/autotest_common.sh@10 -- # set +x 00:33:24.862 23:38:48 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:33:24.862 23:38:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:24.862 23:38:48 -- common/autotest_common.sh@10 -- # set +x 00:33:25.124 23:38:48 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:25.124 23:38:48 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:25.124 23:38:48 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:25.124 23:38:48 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:33:25.124 23:38:48 -- spdk/autotest.sh@398 -- # hostname 00:33:25.124 23:38:48 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:25.124 geninfo: WARNING: invalid characters removed from testname! 
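The coverage capture that spdk/autotest.sh@398 just ran boils down to a single lcov/geninfo pass over the repository. A trimmed sketch follows; the genhtml_* and geninfo_* --rc switches from the full command line above are elided here, and the test name passed with -t is simply the machine hostname, which is why geninfo warns about invalid characters in the testname.

    # Capture test coverage into cov_test.info. --no-external limits the
    # capture to sources under the spdk tree; -q keeps geninfo quiet.
    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
         -q -c --no-external \
         -d /home/vagrant/spdk_repo/spdk \
         -t "$(hostname)" \
         -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info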
00:33:51.734 23:39:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:54.290 23:39:17 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:56.928 23:39:20 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:59.469 23:39:23 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:02.016 23:39:25 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:03.932 23:39:27 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:07.231 23:39:30 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:07.232 23:39:30 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:07.232 23:39:30 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:34:07.232 23:39:30 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:07.232 23:39:30 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:07.232 23:39:30 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:07.232 + [[ -n 5763 ]] 00:34:07.232 + sudo kill 5763 00:34:07.243 [Pipeline] } 00:34:07.261 [Pipeline] // timeout 00:34:07.267 [Pipeline] } 00:34:07.284 [Pipeline] // stage 00:34:07.289 [Pipeline] } 00:34:07.305 [Pipeline] // catchError 00:34:07.315 [Pipeline] stage 00:34:07.318 [Pipeline] { (Stop VM) 00:34:07.331 [Pipeline] sh 00:34:07.614 + vagrant halt 00:34:10.158 ==> default: Halting domain... 
00:34:16.760 [Pipeline] sh
00:34:17.046 + vagrant destroy -f
00:34:19.593 ==> default: Removing domain...
00:34:20.178 [Pipeline] sh
00:34:20.462 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:34:20.472 [Pipeline] }
00:34:20.487 [Pipeline] // stage
00:34:20.492 [Pipeline] }
00:34:20.506 [Pipeline] // dir
00:34:20.511 [Pipeline] }
00:34:20.524 [Pipeline] // wrap
00:34:20.530 [Pipeline] }
00:34:20.547 [Pipeline] // catchError
00:34:20.558 [Pipeline] stage
00:34:20.560 [Pipeline] { (Epilogue)
00:34:20.575 [Pipeline] sh
00:34:20.861 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:26.173 [Pipeline] catchError
00:34:26.176 [Pipeline] {
00:34:26.190 [Pipeline] sh
00:34:26.475 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:26.476 Artifacts sizes are good
00:34:26.486 [Pipeline] }
00:34:26.502 [Pipeline] // catchError
00:34:26.514 [Pipeline] archiveArtifacts
00:34:26.524 Archiving artifacts
00:34:26.656 [Pipeline] cleanWs
00:34:26.673 [WS-CLEANUP] Deleting project workspace...
00:34:26.673 [WS-CLEANUP] Deferred wipeout is used...
00:34:26.692 [WS-CLEANUP] done
00:34:26.694 [Pipeline] }
00:34:26.708 [Pipeline] // stage
00:34:26.713 [Pipeline] }
00:34:26.727 [Pipeline] // node
00:34:26.732 [Pipeline] End of Pipeline
00:34:26.790 Finished: SUCCESS
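Condensed, the VM teardown and artifact hand-off that close this run are three commands; the sketch below restates only what the Stop VM stage and the mv step above logged.

    vagrant halt          # stop the autotest guest (Stop VM stage)
    vagrant destroy -f    # remove the domain and its storage
    mv output /var/jenkins/workspace/nvme-vg-autotest/output   # hand results to Jenkins for archiving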